Test Report: KVM_Linux 18063

                    
9a5d81419c51a6c3c4fef58cf8d1de8416716248:2024-02-29:33343

Test fail (11/332)

TestIngressAddonLegacy/StartLegacyK8sCluster (401.36s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0229 00:56:54.539426  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:10.694957  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:38.379666  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:57.865529  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.870929  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.881192  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.901487  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.941744  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.022090  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.182569  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.503203  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:59.144315  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:00.425150  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:02.987021  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:08.107439  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:18.347894  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:38.829093  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:01:19.790201  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:02:41.711324  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : exit status 109 (6m41.304958948s)
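Triage note: the start command exited with status 109 after 6m41s. The cert_rotation errors above reference client certificates for other test profiles (addons-391247, functional-181199); those look like stale kubeconfig watchers left over from earlier tests in this run rather than the cause of this failure. A minimal repro sketch, assuming a built out/minikube-linux-amd64 and a working kvm2/libvirt host; the start command is taken verbatim from the log, and the leading delete is an added precaution, not part of the original invocation:

	# Start from a clean profile, then rerun the failing start.
	out/minikube-linux-amd64 delete -p ingress-addon-legacy-270792 || true
	out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --alsologtostderr -v=5 --driver=kvm2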

-- stdout --
	* [ingress-addon-legacy-270792] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-270792 in cluster ingress-addon-legacy-270792
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607   51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129   51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336   51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

-- /stdout --
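Triage note: the proximate failure is the kubelet crash loop shown in the stdout above ("Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache"), a cAdvisor-side error in which the kubelet's container manager cannot obtain root-filesystem stats, so the control plane never comes up. A hedged diagnosis sketch, assuming the VM is still running; the profile name comes from the log, while the commands themselves are standard minikube/journalctl usage and not taken from this report:

	# Pull the kubelet journal from the guest to see the full crash loop.
	out/minikube-linux-amd64 ssh -p ingress-addon-legacy-270792 \
	  "sudo journalctl -u kubelet --no-pager | tail -n 100"
	# Check what the guest reports for the root filesystem.
	out/minikube-linux-amd64 ssh -p ingress-addon-legacy-270792 "df -h /"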
** stderr ** 
	I0229 00:56:48.006560  131854 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:56:48.006832  131854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:56:48.006843  131854 out.go:304] Setting ErrFile to fd 2...
	I0229 00:56:48.006848  131854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:56:48.007068  131854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 00:56:48.007657  131854 out.go:298] Setting JSON to false
	I0229 00:56:48.008588  131854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2359,"bootTime":1709165849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:56:48.008655  131854 start.go:139] virtualization: kvm guest
	I0229 00:56:48.011148  131854 out.go:177] * [ingress-addon-legacy-270792] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 00:56:48.012546  131854 notify.go:220] Checking for updates...
	I0229 00:56:48.014279  131854 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 00:56:48.015648  131854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:56:48.016994  131854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:56:48.018255  131854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:56:48.019564  131854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 00:56:48.020864  131854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 00:56:48.022268  131854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:56:48.058268  131854 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 00:56:48.059381  131854 start.go:299] selected driver: kvm2
	I0229 00:56:48.059394  131854 start.go:903] validating driver "kvm2" against <nil>
	I0229 00:56:48.059405  131854 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 00:56:48.060172  131854 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:56:48.060247  131854 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 00:56:48.074948  131854 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 00:56:48.075025  131854 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:56:48.075272  131854 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 00:56:48.075359  131854 cni.go:84] Creating CNI manager for ""
	I0229 00:56:48.075387  131854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 00:56:48.075401  131854 start_flags.go:323] config:
	{Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:56:48.075576  131854 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:56:48.078002  131854 out.go:177] * Starting control plane node ingress-addon-legacy-270792 in cluster ingress-addon-legacy-270792
	I0229 00:56:48.079309  131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 00:56:48.104388  131854 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 00:56:48.104417  131854 cache.go:56] Caching tarball of preloaded images
	I0229 00:56:48.104553  131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 00:56:48.106397  131854 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 00:56:48.108342  131854 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:56:48.133418  131854 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 00:56:52.478052  131854 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:56:52.478150  131854 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:56:53.261650  131854 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 00:56:53.262023  131854 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json ...
	I0229 00:56:53.262052  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json: {Name:mk2e02e5999fc20f88ce115938f1f2ccbf25a78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:56:53.262222  131854 start.go:365] acquiring machines lock for ingress-addon-legacy-270792: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 00:56:53.262257  131854 start.go:369] acquired machines lock for "ingress-addon-legacy-270792" in 17.673µs
	I0229 00:56:53.262274  131854 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 00:56:53.262353  131854 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 00:56:53.264457  131854 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 00:56:53.264620  131854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 00:56:53.264645  131854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 00:56:53.279093  131854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
	I0229 00:56:53.279633  131854 main.go:141] libmachine: () Calling .GetVersion
	I0229 00:56:53.280231  131854 main.go:141] libmachine: Using API Version  1
	I0229 00:56:53.280252  131854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 00:56:53.280549  131854 main.go:141] libmachine: () Calling .GetMachineName
	I0229 00:56:53.280741  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
	I0229 00:56:53.280877  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:56:53.281040  131854 start.go:159] libmachine.API.Create for "ingress-addon-legacy-270792" (driver="kvm2")
	I0229 00:56:53.281085  131854 client.go:168] LocalClient.Create starting
	I0229 00:56:53.281123  131854 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem
	I0229 00:56:53.281166  131854 main.go:141] libmachine: Decoding PEM data...
	I0229 00:56:53.281187  131854 main.go:141] libmachine: Parsing certificate...
	I0229 00:56:53.281256  131854 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem
	I0229 00:56:53.281283  131854 main.go:141] libmachine: Decoding PEM data...
	I0229 00:56:53.281297  131854 main.go:141] libmachine: Parsing certificate...
	I0229 00:56:53.281323  131854 main.go:141] libmachine: Running pre-create checks...
	I0229 00:56:53.281338  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .PreCreateCheck
	I0229 00:56:53.281673  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
	I0229 00:56:53.282070  131854 main.go:141] libmachine: Creating machine...
	I0229 00:56:53.282086  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Create
	I0229 00:56:53.282224  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating KVM machine...
	I0229 00:56:53.283367  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found existing default KVM network
	I0229 00:56:53.285193  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.285019  131888 network.go:210] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0229 00:56:53.285950  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.285892  131888 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209360}
	I0229 00:56:53.291376  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | trying to create private KVM network mk-ingress-addon-legacy-270792 192.168.50.0/24...
	I0229 00:56:53.353383  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | private KVM network mk-ingress-addon-legacy-270792 192.168.50.0/24 created
	I0229 00:56:53.353411  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.353342  131888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:56:53.353426  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting up store path in /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 ...
	I0229 00:56:53.353441  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Building disk image from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 00:56:53.353637  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Downloading /home/jenkins/minikube-integration/18063-115328/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 00:56:53.573971  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.573849  131888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa...
	I0229 00:56:53.689655  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.689539  131888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/ingress-addon-legacy-270792.rawdisk...
	I0229 00:56:53.689689  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Writing magic tar header
	I0229 00:56:53.689705  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Writing SSH key tar header
	I0229 00:56:53.689719  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.689655  131888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 ...
	I0229 00:56:53.689741  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792
	I0229 00:56:53.689771  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 (perms=drwx------)
	I0229 00:56:53.689830  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines
	I0229 00:56:53.689853  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:56:53.689860  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines (perms=drwxr-xr-x)
	I0229 00:56:53.689870  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube (perms=drwxr-xr-x)
	I0229 00:56:53.689879  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328 (perms=drwxrwxr-x)
	I0229 00:56:53.689888  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 00:56:53.689896  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 00:56:53.689903  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating domain...
	I0229 00:56:53.689986  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328
	I0229 00:56:53.690025  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 00:56:53.690044  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins
	I0229 00:56:53.690061  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home
	I0229 00:56:53.690075  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Skipping /home - not owner
	I0229 00:56:53.691100  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) define libvirt domain using xml: 
	I0229 00:56:53.691124  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <domain type='kvm'>
	I0229 00:56:53.691134  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <name>ingress-addon-legacy-270792</name>
	I0229 00:56:53.691142  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <memory unit='MiB'>4096</memory>
	I0229 00:56:53.691151  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <vcpu>2</vcpu>
	I0229 00:56:53.691169  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <features>
	I0229 00:56:53.691188  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <acpi/>
	I0229 00:56:53.691203  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <apic/>
	I0229 00:56:53.691215  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <pae/>
	I0229 00:56:53.691226  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     
	I0229 00:56:53.691239  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   </features>
	I0229 00:56:53.691252  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <cpu mode='host-passthrough'>
	I0229 00:56:53.691264  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   
	I0229 00:56:53.691273  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   </cpu>
	I0229 00:56:53.691300  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <os>
	I0229 00:56:53.691321  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <type>hvm</type>
	I0229 00:56:53.691329  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <boot dev='cdrom'/>
	I0229 00:56:53.691338  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <boot dev='hd'/>
	I0229 00:56:53.691348  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <bootmenu enable='no'/>
	I0229 00:56:53.691356  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   </os>
	I0229 00:56:53.691365  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   <devices>
	I0229 00:56:53.691379  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <disk type='file' device='cdrom'>
	I0229 00:56:53.691392  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/boot2docker.iso'/>
	I0229 00:56:53.691401  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <target dev='hdc' bus='scsi'/>
	I0229 00:56:53.691410  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <readonly/>
	I0229 00:56:53.691416  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </disk>
	I0229 00:56:53.691425  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <disk type='file' device='disk'>
	I0229 00:56:53.691435  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 00:56:53.691449  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/ingress-addon-legacy-270792.rawdisk'/>
	I0229 00:56:53.691461  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <target dev='hda' bus='virtio'/>
	I0229 00:56:53.691496  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </disk>
	I0229 00:56:53.691520  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <interface type='network'>
	I0229 00:56:53.691534  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <source network='mk-ingress-addon-legacy-270792'/>
	I0229 00:56:53.691543  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <model type='virtio'/>
	I0229 00:56:53.691557  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </interface>
	I0229 00:56:53.691568  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <interface type='network'>
	I0229 00:56:53.691581  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <source network='default'/>
	I0229 00:56:53.691591  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <model type='virtio'/>
	I0229 00:56:53.691617  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </interface>
	I0229 00:56:53.691635  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <serial type='pty'>
	I0229 00:56:53.691649  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <target port='0'/>
	I0229 00:56:53.691660  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </serial>
	I0229 00:56:53.691671  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <console type='pty'>
	I0229 00:56:53.691683  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <target type='serial' port='0'/>
	I0229 00:56:53.691696  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </console>
	I0229 00:56:53.691712  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     <rng model='virtio'>
	I0229 00:56:53.691726  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)       <backend model='random'>/dev/random</backend>
	I0229 00:56:53.691737  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     </rng>
	I0229 00:56:53.691748  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     
	I0229 00:56:53.691758  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)     
	I0229 00:56:53.691770  131854 main.go:141] libmachine: (ingress-addon-legacy-270792)   </devices>
	I0229 00:56:53.691784  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </domain>
	I0229 00:56:53.691800  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) 
	I0229 00:56:53.695942  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:0f:73:06 in network default
	I0229 00:56:53.697064  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring networks are active...
	I0229 00:56:53.697089  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:53.697746  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring network default is active
	I0229 00:56:53.698072  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring network mk-ingress-addon-legacy-270792 is active
	I0229 00:56:53.698562  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Getting domain xml...
	I0229 00:56:53.699192  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating domain...
	I0229 00:56:54.884724  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Waiting to get IP...
	I0229 00:56:54.885452  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:54.885857  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:54.885909  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:54.885846  131888 retry.go:31] will retry after 258.552427ms: waiting for machine to come up
	I0229 00:56:55.146485  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.146940  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.146973  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.146893  131888 retry.go:31] will retry after 247.731338ms: waiting for machine to come up
	I0229 00:56:55.396441  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.396855  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.396881  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.396799  131888 retry.go:31] will retry after 352.513436ms: waiting for machine to come up
	I0229 00:56:55.751356  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.751829  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:55.751862  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.751786  131888 retry.go:31] will retry after 485.622043ms: waiting for machine to come up
	I0229 00:56:56.239539  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:56.239979  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:56.240007  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:56.239930  131888 retry.go:31] will retry after 458.147456ms: waiting for machine to come up
	I0229 00:56:56.699645  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:56.700004  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:56.700047  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:56.699978  131888 retry.go:31] will retry after 887.011958ms: waiting for machine to come up
	I0229 00:56:57.589081  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:57.589501  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:57.589531  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:57.589448  131888 retry.go:31] will retry after 1.150502395s: waiting for machine to come up
	I0229 00:56:58.741244  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:56:58.741603  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:56:58.741627  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:58.741558  131888 retry.go:31] will retry after 1.297235785s: waiting for machine to come up
	I0229 00:57:00.040208  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:00.040569  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:57:00.040592  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:00.040526  131888 retry.go:31] will retry after 1.706919488s: waiting for machine to come up
	I0229 00:57:01.749283  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:01.749749  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:57:01.749773  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:01.749690  131888 retry.go:31] will retry after 2.061316918s: waiting for machine to come up
	I0229 00:57:03.812727  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:03.813159  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:57:03.813196  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:03.813108  131888 retry.go:31] will retry after 2.469155816s: waiting for machine to come up
	I0229 00:57:06.285745  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:06.286135  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:57:06.286161  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:06.286092  131888 retry.go:31] will retry after 3.020885508s: waiting for machine to come up
	I0229 00:57:09.308129  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:09.308482  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
	I0229 00:57:09.308513  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:09.308419  131888 retry.go:31] will retry after 4.542039674s: waiting for machine to come up
	I0229 00:57:13.852515  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:13.852978  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Found IP for machine: 192.168.50.187
	I0229 00:57:13.853007  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has current primary IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:13.853017  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Reserving static IP address...
	I0229 00:57:13.853403  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-270792", mac: "52:54:00:42:62:86", ip: "192.168.50.187"} in network mk-ingress-addon-legacy-270792
	I0229 00:57:13.925337  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Getting to WaitForSSH function...
	I0229 00:57:13.925401  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Reserved static IP address: 192.168.50.187
	I0229 00:57:13.925433  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Waiting for SSH to be available...
	I0229 00:57:13.927859  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:13.928312  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:42:62:86}
	I0229 00:57:13.928344  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:13.928565  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using SSH client type: external
	I0229 00:57:13.928599  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa (-rw-------)
	I0229 00:57:13.928643  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 00:57:13.928663  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | About to run SSH command:
	I0229 00:57:13.928682  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | exit 0
	I0229 00:57:14.057426  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | SSH cmd err, output: <nil>: 
	I0229 00:57:14.057895  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) KVM machine creation complete!
	I0229 00:57:14.058157  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
	I0229 00:57:14.058853  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:14.059079  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:14.059255  131854 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 00:57:14.059274  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
	I0229 00:57:14.060580  131854 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 00:57:14.060595  131854 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 00:57:14.060600  131854 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 00:57:14.060606  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.062536  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.062855  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.062887  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.063052  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.063250  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.063426  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.063539  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.063707  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:14.063945  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:14.063959  131854 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 00:57:14.173151  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 00:57:14.173173  131854 main.go:141] libmachine: Detecting the provisioner...
	I0229 00:57:14.173185  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.175915  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.176247  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.176279  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.176481  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.176694  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.176894  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.177043  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.177182  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:14.177342  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:14.177353  131854 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 00:57:14.286577  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 00:57:14.286641  131854 main.go:141] libmachine: found compatible host: buildroot
	I0229 00:57:14.286648  131854 main.go:141] libmachine: Provisioning with buildroot...
	I0229 00:57:14.286656  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
	I0229 00:57:14.286924  131854 buildroot.go:166] provisioning hostname "ingress-addon-legacy-270792"
	I0229 00:57:14.286955  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
	I0229 00:57:14.287161  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.289612  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.289966  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.289997  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.290121  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.290305  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.290464  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.290603  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.290786  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:14.290951  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:14.290964  131854 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-270792 && echo "ingress-addon-legacy-270792" | sudo tee /etc/hostname
	I0229 00:57:14.412326  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-270792
	
	I0229 00:57:14.412368  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.415089  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.415380  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.415415  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.415632  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.415826  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.415997  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.416263  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.416489  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:14.416704  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:14.416725  131854 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-270792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-270792/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-270792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 00:57:14.536262  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 00:57:14.536292  131854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 00:57:14.536309  131854 buildroot.go:174] setting up certificates
	I0229 00:57:14.536321  131854 provision.go:83] configureAuth start
	I0229 00:57:14.536331  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
	I0229 00:57:14.536676  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
	I0229 00:57:14.539109  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.539499  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.539541  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.539650  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.541833  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.542199  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.542220  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.542356  131854 provision.go:138] copyHostCerts
	I0229 00:57:14.542389  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 00:57:14.542428  131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 00:57:14.542447  131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 00:57:14.542525  131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 00:57:14.542614  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 00:57:14.542637  131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 00:57:14.542648  131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 00:57:14.542684  131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 00:57:14.542744  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 00:57:14.542768  131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 00:57:14.542777  131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 00:57:14.542808  131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 00:57:14.542872  131854 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-270792 san=[192.168.50.187 192.168.50.187 localhost 127.0.0.1 minikube ingress-addon-legacy-270792]
	I0229 00:57:14.736454  131854 provision.go:172] copyRemoteCerts
	I0229 00:57:14.736518  131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 00:57:14.736545  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.739491  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.739827  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.739858  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.740008  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.740267  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.740450  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.740611  131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 00:57:14.825228  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 00:57:14.825299  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 00:57:14.850239  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 00:57:14.850323  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 00:57:14.874442  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 00:57:14.874511  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0229 00:57:14.898203  131854 provision.go:86] duration metric: configureAuth took 361.866166ms
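
The server cert generated above carries the SAN list from the log (the VM IP, localhost, 127.0.0.1, minikube, and the profile name) and is signed with the shared minikube CA (ca.pem/ca-key.pem). A minimal Go sketch of that step; it is self-signed here for brevity since the CA key handling is not shown in the log, and file output is reduced to stdout:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key pair for server.pem/server-key.pem; minikube would sign with ca-key.pem.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-270792"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as logged: IPs plus DNS names for the docker TLS endpoint.
    		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-270792"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.50.187"), net.ParseIP("127.0.0.1")},
    	}
    	// Self-signed in this sketch: the template doubles as the parent cert.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
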
	I0229 00:57:14.898233  131854 buildroot.go:189] setting minikube options for container-runtime
	I0229 00:57:14.898450  131854 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 00:57:14.898480  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:14.898788  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:14.901489  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.901915  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:14.901945  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:14.902118  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:14.902302  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.902513  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:14.902695  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:14.902881  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:14.903046  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:14.903058  131854 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 00:57:15.011377  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 00:57:15.011401  131854 buildroot.go:70] root file system type: tmpfs
	I0229 00:57:15.011541  131854 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 00:57:15.011571  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:15.014048  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.014353  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:15.014381  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.014592  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:15.014776  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:15.014938  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:15.015112  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:15.015257  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:15.015456  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:15.015543  131854 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 00:57:15.140512  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 00:57:15.140542  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:15.143344  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.143700  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:15.143730  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.143895  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:15.144095  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:15.144297  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:15.144403  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:15.144595  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:15.144751  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:15.144766  131854 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 00:57:15.917893  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
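Two things happen in the unit write above. First, the drop-in clears any inherited start command: an empty ExecStart= line resets systemd's command list so the dockerd invocation that follows is the only one (otherwise systemd rejects non-oneshot units with more than one ExecStart=, exactly as the unit's own comments warn). Second, the diff-or-swap one-liner makes the update idempotent: docker is only enabled and restarted when docker.service.new actually differs, and here diff failed because no unit existed yet, so the new file was simply moved into place. A small Go sketch of that compare-then-swap pattern, with hypothetical paths and the daemon-reload/restart step left to the caller:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // updateIfChanged writes next to path only when the content differs,
    // mirroring the `diff -u ... || { mv ...; restart; }` guard in the log.
    func updateIfChanged(path string, next []byte) (bool, error) {
    	prev, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(prev, next) {
    		return false, nil // identical: skip the service restart entirely
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, next, 0o644); err != nil {
    		return false, err
    	}
    	// Atomic swap on the same filesystem; the caller would then run
    	// `systemctl daemon-reload` and restart the service.
    	return true, os.Rename(tmp, path)
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("changed:", changed)
    }
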
	I0229 00:57:15.917915  131854 main.go:141] libmachine: Checking connection to Docker...
	I0229 00:57:15.917925  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetURL
	I0229 00:57:15.919252  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using libvirt version 6000000
	I0229 00:57:15.921561  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.921916  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:15.921953  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.922143  131854 main.go:141] libmachine: Docker is up and running!
	I0229 00:57:15.922161  131854 main.go:141] libmachine: Reticulating splines...
	I0229 00:57:15.922170  131854 client.go:171] LocalClient.Create took 22.641073032s
	I0229 00:57:15.922196  131854 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-270792" took 22.64115801s
	I0229 00:57:15.922208  131854 start.go:300] post-start starting for "ingress-addon-legacy-270792" (driver="kvm2")
	I0229 00:57:15.922221  131854 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 00:57:15.922240  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:15.922541  131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 00:57:15.922565  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:15.924877  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.925176  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:15.925205  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:15.925302  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:15.925498  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:15.925668  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:15.925827  131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 00:57:16.013245  131854 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 00:57:16.017500  131854 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 00:57:16.017525  131854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 00:57:16.017588  131854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 00:57:16.017675  131854 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 00:57:16.017688  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> /etc/ssl/certs/1225952.pem
	I0229 00:57:16.017769  131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 00:57:16.027853  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 00:57:16.052066  131854 start.go:303] post-start completed in 129.839713ms
	I0229 00:57:16.052118  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
	I0229 00:57:16.052663  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
	I0229 00:57:16.055188  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.055574  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:16.055598  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.055830  131854 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json ...
	I0229 00:57:16.056038  131854 start.go:128] duration metric: createHost completed in 22.793665743s
	I0229 00:57:16.056071  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:16.058312  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.058654  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:16.058681  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.058810  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:16.058981  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:16.059157  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:16.059296  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:16.059473  131854 main.go:141] libmachine: Using SSH client type: native
	I0229 00:57:16.059634  131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.187 22 <nil> <nil>}
	I0229 00:57:16.059644  131854 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 00:57:16.170405  131854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709168236.142294246
	
	I0229 00:57:16.170431  131854 fix.go:206] guest clock: 1709168236.142294246
	I0229 00:57:16.170438  131854 fix.go:219] Guest: 2024-02-29 00:57:16.142294246 +0000 UTC Remote: 2024-02-29 00:57:16.056052898 +0000 UTC m=+28.099227171 (delta=86.241348ms)
	I0229 00:57:16.170459  131854 fix.go:190] guest clock delta is within tolerance: 86.241348ms
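
The clock check above runs `date +%s.%N` in the guest and compares the result with the host-side wall clock captured for the createHost metric; the 86 ms delta passes. A sketch of parsing that seconds.nanoseconds output, assuming the nine-digit fractional part that %N produces (the actual tolerance threshold is not shown in the log):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock parses `date +%s.%N` output such as "1709168236.142294246".
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 { // %N is zero-padded to nine digits
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1709168236.142294246")
    	fmt.Println("guest clock:", guest.UTC(), "delta vs host:", time.Since(guest))
    }
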
	I0229 00:57:16.170464  131854 start.go:83] releasing machines lock for "ingress-addon-legacy-270792", held for 22.908197836s
	I0229 00:57:16.170483  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:16.170749  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
	I0229 00:57:16.173371  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.173661  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:16.173699  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.173857  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:16.174510  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:16.174712  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 00:57:16.174773  131854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 00:57:16.174833  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:16.174976  131854 ssh_runner.go:195] Run: cat /version.json
	I0229 00:57:16.175003  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 00:57:16.177483  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.177710  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.177859  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:16.177890  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.178032  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:16.178128  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:16.178163  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:16.178230  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:16.178278  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 00:57:16.178425  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:16.178497  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 00:57:16.178578  131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 00:57:16.178667  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 00:57:16.178780  131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 00:57:16.279726  131854 ssh_runner.go:195] Run: systemctl --version
	I0229 00:57:16.285982  131854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 00:57:16.291689  131854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 00:57:16.291760  131854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 00:57:16.301707  131854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 00:57:16.321505  131854 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
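
The two find/sed passes above rewrite any pre-existing bridge or podman CNI configs so their subnet and gateway land inside the pod CIDR (10.244.0.0/16) that kubeadm is configured with further down. The same substitution in Go, applied to a hypothetical conflist fragment:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Fragment shaped like 87-podman-bridge.conflist; values are illustrative.
    	conf := `{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}`
    	subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
    	gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
    	conf = subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
    	conf = gateway.ReplaceAllString(conf, `"gateway": "10.244.0.1"`)
    	fmt.Println(conf) // both fields now point into the pod CIDR
    }
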
	I0229 00:57:16.321540  131854 start.go:475] detecting cgroup driver to use...
	I0229 00:57:16.321661  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 00:57:16.346146  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 00:57:16.360056  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 00:57:16.371726  131854 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 00:57:16.371802  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 00:57:16.382190  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 00:57:16.392089  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 00:57:16.402511  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 00:57:16.412807  131854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 00:57:16.423080  131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 00:57:16.433206  131854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 00:57:16.442261  131854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 00:57:16.451149  131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:57:16.560936  131854 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 00:57:16.588457  131854 start.go:475] detecting cgroup driver to use...
	I0229 00:57:16.588562  131854 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 00:57:16.608488  131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 00:57:16.622636  131854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 00:57:16.641147  131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 00:57:16.654317  131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 00:57:16.666793  131854 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 00:57:16.696033  131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 00:57:16.709644  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 00:57:16.728600  131854 ssh_runner.go:195] Run: which cri-dockerd
	I0229 00:57:16.732535  131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 00:57:16.741858  131854 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 00:57:16.759377  131854 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 00:57:16.872931  131854 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 00:57:17.004655  131854 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 00:57:17.004799  131854 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 00:57:17.022087  131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:57:17.135572  131854 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 00:57:18.509258  131854 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.373650374s)
	I0229 00:57:18.509336  131854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 00:57:18.539017  131854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
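
Both runtimes get forced onto the cgroupfs driver here: containerd via the SystemdCgroup sed edits earlier, docker via the 130-byte daemon.json that is scp'd over. The log does not show that file's contents; below is a plausible reconstruction using docker's standard exec-opts key (an assumption, not the verbatim file):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed daemon.json payload; only the cgroup driver choice is
    	// confirmed by the log, the rest of the real file is unknown.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(b))
    }
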
	I0229 00:57:18.565331  131854 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0229 00:57:18.565376  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
	I0229 00:57:18.567984  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:18.568359  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 00:57:18.568385  131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 00:57:18.568569  131854 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 00:57:18.572910  131854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
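
The hosts update is another idempotent edit: strip any existing host.minikube.internal line, append the current gateway mapping, and copy the result back over /etc/hosts. Equivalent logic in Go, with the file I/O left out:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // ensureHostsEntry drops any stale line ending in <TAB>name and appends
    // ip<TAB>name, matching the grep -v / echo pipeline in the log.
    func ensureHostsEntry(hosts, ip, name string) string {
    	var out []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			out = append(out, line)
    		}
    	}
    	out = append(out, ip+"\t"+name)
    	return strings.Join(out, "\n") + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.50.1", "host.minikube.internal"))
    }
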
	I0229 00:57:18.585836  131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 00:57:18.585886  131854 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 00:57:18.601577  131854 docker.go:685] Got preloaded images: 
	I0229 00:57:18.601594  131854 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 00:57:18.601642  131854 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 00:57:18.611265  131854 ssh_runner.go:195] Run: which lz4
	I0229 00:57:18.615078  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 00:57:18.615169  131854 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 00:57:18.619557  131854 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 00:57:18.619592  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 00:57:20.100666  131854 docker.go:649] Took 1.485514 seconds to copy over tarball
	I0229 00:57:20.100758  131854 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 00:57:22.276056  131854 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.175257365s)
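
For scale: the 424,164,442-byte preload tarball was copied over SSH in about 1.49 s and unpacked in about 2.18 s, i.e. roughly 272 MiB/s for the transfer and 186 MiB/s through lz4 extraction, both rates measured against the compressed size:

    package main

    import "fmt"

    func main() {
    	const tarball = 424164442.0 // bytes, from the scp line above
    	const copySecs, untarSecs = 1.485514, 2.175257
    	fmt.Printf("copy:  %.0f MiB/s\n", tarball/copySecs/(1<<20))  // ≈ 272
    	fmt.Printf("untar: %.0f MiB/s\n", tarball/untarSecs/(1<<20)) // ≈ 186
    }
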
	I0229 00:57:22.276096  131854 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 00:57:22.315638  131854 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 00:57:22.326621  131854 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 00:57:22.345988  131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 00:57:22.468935  131854 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 00:57:26.881060  131854 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.412082706s)
	I0229 00:57:26.881145  131854 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 00:57:26.898886  131854 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 00:57:26.898905  131854 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 00:57:26.898914  131854 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 00:57:26.900531  131854 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 00:57:26.900549  131854 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 00:57:26.900531  131854 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 00:57:26.900609  131854 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 00:57:26.900531  131854 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 00:57:26.900540  131854 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 00:57:26.900531  131854 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 00:57:26.900546  131854 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 00:57:26.901543  131854 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 00:57:26.901603  131854 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 00:57:26.901620  131854 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 00:57:26.901657  131854 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 00:57:26.901711  131854 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 00:57:26.901657  131854 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 00:57:26.901542  131854 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 00:57:26.901542  131854 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 00:57:27.034919  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 00:57:27.045554  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 00:57:27.047766  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 00:57:27.053166  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 00:57:27.054047  131854 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 00:57:27.054098  131854 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 00:57:27.054139  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 00:57:27.058802  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 00:57:27.063213  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 00:57:27.070023  131854 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 00:57:27.070077  131854 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 00:57:27.070121  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 00:57:27.083565  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 00:57:27.100745  131854 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 00:57:27.100796  131854 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 00:57:27.100839  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 00:57:27.123507  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 00:57:27.124344  131854 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 00:57:27.124385  131854 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 00:57:27.124426  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 00:57:27.134567  131854 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 00:57:27.134602  131854 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 00:57:27.134630  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 00:57:27.134643  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 00:57:27.134778  131854 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 00:57:27.134816  131854 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 00:57:27.134874  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 00:57:27.144231  131854 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 00:57:27.144267  131854 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 00:57:27.144308  131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 00:57:27.160261  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 00:57:27.177560  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 00:57:27.178957  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 00:57:27.185048  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 00:57:27.190009  131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 00:57:27.487476  131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 00:57:27.505673  131854 cache_images.go:92] LoadImages completed in 606.743591ms
	W0229 00:57:27.505767  131854 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
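
The LoadImages pass above works by tag-and-hash lookup: the preload tagged images under k8s.gcr.io, while the expected names are registry.k8s.io, so `docker image inspect` fails to find each expected tag, the image is flagged "needs transfer", and minikube falls back to the on-disk image cache, which is empty here, hence the warning (the cluster can still start, since the runtime pulls whatever is missing). A sketch of that existence-at-hash check, using the etcd hash quoted in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether image is absent from the runtime, or
    // present under a different content ID than wantID. This is a sketch of
    // the check, not minikube's exact cache_images logic.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present at all: must be transferred
    	}
    	got := strings.TrimSpace(string(out)) // e.g. "sha256:303ce5db..."
    	return !strings.HasSuffix(got, wantID)
    }

    func main() {
    	fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/etcd:3.4.3-0",
    		"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"))
    }
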
	I0229 00:57:27.505853  131854 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 00:57:27.535292  131854 cni.go:84] Creating CNI manager for ""
	I0229 00:57:27.535311  131854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 00:57:27.535345  131854 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 00:57:27.535370  131854 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.187 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-270792 NodeName:ingress-addon-legacy-270792 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 00:57:27.535539  131854 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-270792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 00:57:27.535622  131854 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-270792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 00:57:27.535680  131854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 00:57:27.545659  131854 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 00:57:27.545721  131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 00:57:27.556011  131854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0229 00:57:27.572878  131854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 00:57:27.589683  131854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2130 bytes)
	I0229 00:57:27.606173  131854 ssh_runner.go:195] Run: grep 192.168.50.187	control-plane.minikube.internal$ /etc/hosts
	I0229 00:57:27.609949  131854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 00:57:27.621658  131854 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792 for IP: 192.168.50.187
	I0229 00:57:27.621694  131854 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:27.621877  131854 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 00:57:27.621915  131854 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 00:57:27.621957  131854 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key
	I0229 00:57:27.621969  131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt with IP's: []
	I0229 00:57:27.812961  131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt ...
	I0229 00:57:27.812993  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt: {Name:mkfd2e599baea25b414b240f7a7347f9b074f404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:27.813155  131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key ...
	I0229 00:57:27.813171  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key: {Name:mk7893dc33425ec30964686ef54c96a435eef65d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:27.813244  131854 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79
	I0229 00:57:27.813261  131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 with IP's: [192.168.50.187 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 00:57:27.952871  131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 ...
	I0229 00:57:27.952909  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79: {Name:mk8d840108f5ce5b36775ceb882186179d17da57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:27.953063  131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79 ...
	I0229 00:57:27.953077  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79: {Name:mk7e29416ae2d5d7bbd1b81391d721d2e3fb8793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:27.953144  131854 certs.go:337] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt
	I0229 00:57:27.953210  131854 certs.go:341] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key
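
Note the apiserver cert SANs above: besides the node IP and loopback, they include 10.96.0.1, the first address of the service CIDR (10.96.0.0/12) where the in-cluster kubernetes Service answers, plus 10.0.0.1, presumably kept for older default service ranges. Deriving that first service IP, as a sketch:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstServiceIP returns network address + 1 for an IPv4 service CIDR,
    // which is where the apiserver's ClusterIP lives.
    func firstServiceIP(cidr string) (net.IP, error) {
    	_, n, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := n.IP.To4()
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3]++ // assumes a CIDR wider than /31
    	return out, nil
    }

    func main() {
    	ip, _ := firstServiceIP("10.96.0.0/12")
    	fmt.Println(ip) // 10.96.0.1
    }
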
	I0229 00:57:27.953258  131854 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key
	I0229 00:57:27.953271  131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt with IP's: []
	I0229 00:57:28.158575  131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt ...
	I0229 00:57:28.158611  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt: {Name:mk53acca5cc64b570a221260774610c3bc74e1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:28.158767  131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key ...
	I0229 00:57:28.158781  131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key: {Name:mkf4026af2f407907d1b5d938a2e5a7f64e813eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 00:57:28.158851  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 00:57:28.158877  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 00:57:28.158890  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 00:57:28.158902  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 00:57:28.158917  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 00:57:28.158929  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 00:57:28.158941  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 00:57:28.158952  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 00:57:28.159003  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 00:57:28.159040  131854 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 00:57:28.159049  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 00:57:28.159075  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 00:57:28.159097  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 00:57:28.159117  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 00:57:28.159152  131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 00:57:28.159179  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:57:28.159192  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem -> /usr/share/ca-certificates/122595.pem
	I0229 00:57:28.159204  131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> /usr/share/ca-certificates/1225952.pem
	I0229 00:57:28.159840  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 00:57:28.186394  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 00:57:28.210802  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 00:57:28.235263  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 00:57:28.259408  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 00:57:28.283009  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 00:57:28.306973  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 00:57:28.330869  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 00:57:28.355208  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 00:57:28.379227  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 00:57:28.403192  131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 00:57:28.426826  131854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 00:57:28.443292  131854 ssh_runner.go:195] Run: openssl version
	I0229 00:57:28.448788  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 00:57:28.459987  131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 00:57:28.464527  131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 00:57:28.464581  131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 00:57:28.470312  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 00:57:28.481659  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 00:57:28.492818  131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:57:28.497442  131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:57:28.497493  131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 00:57:28.503104  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 00:57:28.513537  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 00:57:28.524326  131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 00:57:28.528850  131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 00:57:28.528912  131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 00:57:28.534705  131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
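Note: the ssh_runner sequence above is the standard OpenSSL trust-store pattern: each PEM is copied under /usr/share/ca-certificates, symlinked by name into /etc/ssl/certs, hashed with openssl x509 -hash -noout, and then symlinked again under the hash-named <hash>.0 entry that OpenSSL's certificate lookup expects. A minimal shell sketch of the same steps for one certificate (variable names are illustrative, not minikube's):

    # sketch: install one CA cert the way the log does (assumes openssl and sudo on the node)
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")                      # e.g. b5213941, as in the link above
    sudo ln -fs "$cert" "/etc/ssl/certs/$(basename "$cert")"           # name-based link
    sudo ln -fs "/etc/ssl/certs/$(basename "$cert")" "/etc/ssl/certs/${hash}.0"  # hash link used at verify time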
	I0229 00:57:28.545369  131854 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 00:57:28.549706  131854 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
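Note: the status-2 ls above is expected on a fresh VM; minikube treats the missing /var/lib/minikube/certs/etcd directory as a first-start signal rather than a failure. In shell terms the check reduces to something like this hedged sketch (not minikube's actual Go code):

    # sketch of the first-start check: a missing etcd certs dir means no prior cluster state
    if ! ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
        echo "etcd certs directory missing; assuming first start"
    fi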
	I0229 00:57:28.549750  131854 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.187 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:57:28.549924  131854 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 00:57:28.567151  131854 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 00:57:28.576781  131854 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 00:57:28.586034  131854 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 00:57:28.595191  131854 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 00:57:28.595229  131854 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 00:57:28.643915  131854 kubeadm.go:322] W0229 00:57:28.620093    1365 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 00:57:28.728462  131854 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 00:57:28.759062  131854 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 00:57:28.831499  131854 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 00:57:31.463471  131854 kubeadm.go:322] W0229 00:57:31.440525    1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 00:57:31.464345  131854 kubeadm.go:322] W0229 00:57:31.441532    1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 00:59:26.459443  131854 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 00:59:26.459610  131854 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 00:59:26.460568  131854 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 00:59:26.460647  131854 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 00:59:26.460744  131854 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 00:59:26.460860  131854 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 00:59:26.460976  131854 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 00:59:26.461082  131854 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 00:59:26.461157  131854 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 00:59:26.461212  131854 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 00:59:26.461281  131854 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 00:59:26.463087  131854 out.go:204]   - Generating certificates and keys ...
	I0229 00:59:26.463179  131854 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 00:59:26.463245  131854 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 00:59:26.463344  131854 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 00:59:26.463395  131854 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 00:59:26.463470  131854 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 00:59:26.463536  131854 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 00:59:26.463606  131854 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 00:59:26.463712  131854 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0229 00:59:26.463758  131854 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 00:59:26.463888  131854 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	I0229 00:59:26.463955  131854 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 00:59:26.464017  131854 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 00:59:26.464057  131854 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 00:59:26.464103  131854 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 00:59:26.464149  131854 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 00:59:26.464197  131854 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 00:59:26.464252  131854 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 00:59:26.464302  131854 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 00:59:26.464357  131854 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 00:59:26.465890  131854 out.go:204]   - Booting up control plane ...
	I0229 00:59:26.465986  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 00:59:26.466068  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 00:59:26.466142  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 00:59:26.466213  131854 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 00:59:26.466347  131854 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 00:59:26.466392  131854 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 00:59:26.466477  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 00:59:26.466641  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 00:59:26.466708  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 00:59:26.466877  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 00:59:26.466936  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 00:59:26.467100  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 00:59:26.467157  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 00:59:26.467338  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 00:59:26.467434  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 00:59:26.467625  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 00:59:26.467634  131854 kubeadm.go:322] 
	I0229 00:59:26.467690  131854 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 00:59:26.467749  131854 kubeadm.go:322] 		timed out waiting for the condition
	I0229 00:59:26.467763  131854 kubeadm.go:322] 
	I0229 00:59:26.467818  131854 kubeadm.go:322] 	This error is likely caused by:
	I0229 00:59:26.467867  131854 kubeadm.go:322] 		- The kubelet is not running
	I0229 00:59:26.467971  131854 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 00:59:26.467982  131854 kubeadm.go:322] 
	I0229 00:59:26.468070  131854 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 00:59:26.468105  131854 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 00:59:26.468133  131854 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 00:59:26.468139  131854 kubeadm.go:322] 
	I0229 00:59:26.468264  131854 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 00:59:26.468372  131854 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 00:59:26.468388  131854 kubeadm.go:322] 
	I0229 00:59:26.468499  131854 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 00:59:26.468574  131854 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 00:59:26.468672  131854 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 00:59:26.468734  131854 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 00:59:26.468773  131854 kubeadm.go:322] 
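Note: the remediation kubeadm prints above can be run in one pass on the node. A sketch combining those exact commands (CONTAINERID stays a placeholder, as in the log; --no-pager is added only so the output is script-friendly):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo docker ps -a | grep kube | grep -v pause
    # then, for any failing container found above:
    # sudo docker logs CONTAINERID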
	W0229 00:59:26.468920  131854 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 00:57:28.620093    1365 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 00:57:31.440525    1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 00:57:31.441532    1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 00:59:26.469013  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 00:59:27.204472  131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 00:59:27.219158  131854 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 00:59:27.228498  131854 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 00:59:27.228535  131854 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 00:59:27.283617  131854 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 00:59:27.283721  131854 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 00:59:27.484259  131854 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 00:59:27.484363  131854 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 00:59:27.484515  131854 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 00:59:27.626940  131854 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 00:59:27.628113  131854 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 00:59:27.628191  131854 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 00:59:27.756916  131854 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 00:59:27.759155  131854 out.go:204]   - Generating certificates and keys ...
	I0229 00:59:27.759258  131854 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 00:59:27.759348  131854 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 00:59:27.759449  131854 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 00:59:27.759526  131854 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 00:59:27.759616  131854 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 00:59:27.759690  131854 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 00:59:27.759802  131854 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 00:59:27.759903  131854 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 00:59:27.760008  131854 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 00:59:27.763180  131854 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 00:59:27.763245  131854 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 00:59:27.763345  131854 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 00:59:27.894369  131854 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 00:59:28.208408  131854 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 00:59:28.436268  131854 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 00:59:28.804982  131854 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 00:59:28.805742  131854 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 00:59:28.807665  131854 out.go:204]   - Booting up control plane ...
	I0229 00:59:28.807763  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 00:59:28.813940  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 00:59:28.821265  131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 00:59:28.822192  131854 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 00:59:28.824137  131854 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:00:08.826388  131854 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:00:08.827564  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:00:08.829307  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:00:13.828536  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:00:13.828742  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:00:23.829347  131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:00:23.829567  131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:03:28.827609  131854 kubeadm.go:322] 
	I0229 01:03:28.827706  131854 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 01:03:28.827760  131854 kubeadm.go:322] 		timed out waiting for the condition
	I0229 01:03:28.827786  131854 kubeadm.go:322] 
	I0229 01:03:28.827823  131854 kubeadm.go:322] 	This error is likely caused by:
	I0229 01:03:28.827911  131854 kubeadm.go:322] 		- The kubelet is not running
	I0229 01:03:28.828089  131854 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:03:28.828104  131854 kubeadm.go:322] 
	I0229 01:03:28.828222  131854 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:03:28.828283  131854 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 01:03:28.828341  131854 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 01:03:28.828354  131854 kubeadm.go:322] 
	I0229 01:03:28.828491  131854 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:03:28.828594  131854 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 01:03:28.828602  131854 kubeadm.go:322] 
	I0229 01:03:28.828734  131854 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:03:28.828822  131854 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:03:28.828930  131854 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 01:03:28.828988  131854 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 01:03:28.828999  131854 kubeadm.go:322] 
	I0229 01:03:28.829686  131854 kubeadm.go:322] W0229 00:59:27.271908   18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 01:03:28.829952  131854 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:03:28.830135  131854 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 01:03:28.830307  131854 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:03:28.830498  131854 kubeadm.go:322] W0229 00:59:28.809110   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:03:28.830621  131854 kubeadm.go:322] W0229 00:59:28.810288   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 01:03:28.830737  131854 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:03:28.830828  131854 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:03:28.830955  131854 kubeadm.go:406] StartCluster complete in 6m0.281208007s
	I0229 01:03:28.831123  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:03:28.848565  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.848583  131854 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:03:28.848639  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:03:28.866889  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.866913  131854 logs.go:278] No container was found matching "etcd"
	I0229 01:03:28.866978  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:03:28.884032  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.884053  131854 logs.go:278] No container was found matching "coredns"
	I0229 01:03:28.884113  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:03:28.903440  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.903459  131854 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:03:28.903508  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:03:28.940982  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.941010  131854 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:03:28.941069  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:03:28.966084  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.966112  131854 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:03:28.966171  131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:03:28.989010  131854 logs.go:276] 0 containers: []
	W0229 01:03:28.989034  131854 logs.go:278] No container was found matching "kindnet"
	I0229 01:03:28.989051  131854 logs.go:123] Gathering logs for kubelet ...
	I0229 01:03:28.989067  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:03:29.023231  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607   51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:03:29.034129  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129   51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:03:29.043868  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336   51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:03:29.050484  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:25 ingress-addon-legacy-270792 kubelet[52057]: F0229 01:03:25.205454   52057 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:03:29.057092  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:26 ingress-addon-legacy-270792 kubelet[52234]: F0229 01:03:26.399816   52234 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 01:03:29.063787  131854 logs.go:138] Found kubelet problem: Feb 29 01:03:27 ingress-addon-legacy-270792 kubelet[52414]: F0229 01:03:27.714680   52414 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
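Note: all six kubelet problems above share the same fatal message, "Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache" — the kubelet is crash-looping before it can serve /healthz, which is consistent with the connection-refused kubelet-check probes earlier in the run. To pull just those lines out of the same journal window the test reads, a sketch of the filter:

    sudo journalctl -u kubelet -n 400 | grep -F 'Failed to start ContainerManager'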
	I0229 01:03:29.069823  131854 logs.go:123] Gathering logs for dmesg ...
	I0229 01:03:29.069840  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:03:29.083597  131854 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:03:29.083621  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:03:29.143162  131854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:03:29.143185  131854 logs.go:123] Gathering logs for Docker ...
	I0229 01:03:29.143203  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:03:29.185146  131854 logs.go:123] Gathering logs for container status ...
	I0229 01:03:29.185176  131854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
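Note: the container-status probe above prefers crictl and falls back to docker. The same one-liner expanded for readability (keeping the bare name crictl as the fallback value so the || branch to docker still triggers when crictl is absent):

    crictl_bin=$(which crictl || echo crictl)
    sudo "$crictl_bin" ps -a || sudo docker ps -a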
	W0229 01:03:29.237855  131854 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 00:59:27.271908   18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 00:59:28.809110   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 00:59:28.810288   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:03:29.237906  131854 out.go:239] * 
	W0229 01:03:29.238098  131854 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 00:59:27.271908   18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 00:59:28.809110   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 00:59:28.810288   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0229 01:03:29.238131  131854 out.go:239] * 
	W0229 01:03:29.238964  131854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:03:29.241538  131854 out.go:177] X Problems detected in kubelet:
	I0229 01:03:29.243125  131854 out.go:177]   Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607   51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:03:29.244493  131854 out.go:177]   Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129   51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:03:29.245719  131854 out.go:177]   Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336   51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 01:03:29.248427  131854 out.go:177] 
	W0229 01:03:29.249789  131854 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 00:59:27.271908   18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 00:59:28.809110   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 00:59:28.810288   18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W0229 01:03:29.249842  131854 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:03:29.249860  131854 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:03:29.251429  131854 out.go:177] 

                                                
                                                
** /stderr **
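The kubeadm advice embedded in the log above can be followed directly on the node. A minimal diagnostic sketch, assuming shell access to the VM (for this profile, `minikube ssh -p ingress-addon-legacy-270792`); every command is taken from the log itself except the trailing container ID, which is a placeholder:

	# Check the kubelet unit and its recent journal entries.
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	# Probe the endpoint the kubelet-check above polls.
	curl -sSL http://localhost:10248/healthz
	# List control-plane containers, then inspect a failing one.
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID    # substitute an ID from the listing above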
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (401.36s)
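Two leads stand out in this failure: the preflight warning that Docker is using the "cgroupfs" cgroup driver where "systemd" is recommended, and the kubelet dying repeatedly with "Failed to start ContainerManager failed to get rootfs info". The sketch below applies minikube's own suggested remedy, reusing the start flags from this test; whether the driver change also resolves the rootfs-info crash is not verified here:

	# Inside the VM: confirm the driver mismatch the preflight check warned about.
	docker info --format '{{.CgroupDriver}}'    # expected to print: cgroupfs
	# From the host: retry with the kubelet pinned to the systemd driver,
	# per the suggestion in the log above.
	minikube start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 \
	  --memory=4096 --wait=true --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd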

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (103.1s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-270792 addons enable ingress --alsologtostderr -v=5
E0229 01:04:10.694701  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 01:04:57.865911  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-270792 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m42.842170947s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:03:29.372093  132976 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:03:29.372289  132976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:03:29.372299  132976 out.go:304] Setting ErrFile to fd 2...
	I0229 01:03:29.372302  132976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:03:29.372733  132976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:03:29.373218  132976 mustload.go:65] Loading cluster: ingress-addon-legacy-270792
	I0229 01:03:29.374194  132976 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:03:29.374232  132976 addons.go:597] checking whether the cluster is paused
	I0229 01:03:29.374371  132976 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:03:29.374391  132976 host.go:66] Checking if "ingress-addon-legacy-270792" exists ...
	I0229 01:03:29.374796  132976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:03:29.374854  132976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:03:29.391024  132976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42965
	I0229 01:03:29.391565  132976 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:03:29.392134  132976 main.go:141] libmachine: Using API Version  1
	I0229 01:03:29.392165  132976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:03:29.392504  132976 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:03:29.392700  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
	I0229 01:03:29.394671  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 01:03:29.394922  132976 ssh_runner.go:195] Run: systemctl --version
	I0229 01:03:29.394941  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 01:03:29.397478  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:03:29.398015  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 01:03:29.398037  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:03:29.398207  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 01:03:29.398370  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 01:03:29.398495  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 01:03:29.398633  132976 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 01:03:29.479980  132976 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:03:29.497604  132976 main.go:141] libmachine: Making call to close driver server
	I0229 01:03:29.497639  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:03:29.497930  132976 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:03:29.497951  132976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:03:29.497988  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Closing plugin on server side
	I0229 01:03:29.500443  132976 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:03:29.501992  132976 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:03:29.502011  132976 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-270792"
	I0229 01:03:29.502022  132976 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-270792"
	I0229 01:03:29.502063  132976 host.go:66] Checking if "ingress-addon-legacy-270792" exists ...
	I0229 01:03:29.502464  132976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:03:29.502517  132976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:03:29.517147  132976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0229 01:03:29.517586  132976 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:03:29.518163  132976 main.go:141] libmachine: Using API Version  1
	I0229 01:03:29.518193  132976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:03:29.518504  132976 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:03:29.518971  132976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:03:29.519020  132976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:03:29.533246  132976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0229 01:03:29.533633  132976 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:03:29.534141  132976 main.go:141] libmachine: Using API Version  1
	I0229 01:03:29.534169  132976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:03:29.534523  132976 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:03:29.534830  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
	I0229 01:03:29.536250  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 01:03:29.538138  132976 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:03:29.539369  132976 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 01:03:29.540579  132976 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 01:03:29.541965  132976 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 01:03:29.541980  132976 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 01:03:29.541994  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 01:03:29.544602  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:03:29.545022  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 01:03:29.545052  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:03:29.545227  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 01:03:29.545410  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 01:03:29.545563  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 01:03:29.545736  132976 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 01:03:29.639649  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:29.699893  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:29.699932  132976 retry.go:31] will retry after 229.60813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:29.930513  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:30.019212  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:30.019253  132976 retry.go:31] will retry after 344.55941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:30.364882  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:30.451436  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:30.451475  132976 retry.go:31] will retry after 699.784036ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:31.152453  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:31.240563  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:31.240603  132976 retry.go:31] will retry after 981.158755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:32.222853  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:32.286807  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:32.286839  132976 retry.go:31] will retry after 1.763910397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:34.051807  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:34.121873  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:34.121907  132976 retry.go:31] will retry after 2.189604452s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:36.311870  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:36.392070  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:36.392113  132976 retry.go:31] will retry after 3.557475126s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:39.950286  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:40.047093  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:40.047133  132976 retry.go:31] will retry after 3.627364232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:43.674792  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:43.779550  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:43.779597  132976 retry.go:31] will retry after 5.815976327s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:49.595809  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:03:49.656952  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:03:49.656983  132976 retry.go:31] will retry after 10.877006094s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:00.534360  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:04:00.596020  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:00.596062  132976 retry.go:31] will retry after 17.01757748s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:17.613913  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:04:17.688379  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:17.688430  132976 retry.go:31] will retry after 32.069976262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:49.759334  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:04:49.822018  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:04:49.822065  132976 retry.go:31] will retry after 22.252483837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:12.076466  132976 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 01:05:12.141253  132976 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:12.141347  132976 main.go:141] libmachine: Making call to close driver server
	I0229 01:05:12.141361  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:05:12.141695  132976 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:05:12.141715  132976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:05:12.141729  132976 main.go:141] libmachine: Making call to close driver server
	I0229 01:05:12.141730  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Closing plugin on server side
	I0229 01:05:12.141737  132976 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:05:12.142019  132976 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:05:12.142039  132976 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:05:12.142056  132976 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-270792"
	I0229 01:05:12.144475  132976 out.go:177] * Verifying ingress addon...
	I0229 01:05:12.147287  132976 out.go:177] 
	W0229 01:05:12.148828  132976 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-270792" does not exist: client config: context "ingress-addon-legacy-270792" does not exist]
	W0229 01:05:12.148848  132976 out.go:239] * 
	W0229 01:05:12.150864  132976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:05:12.152428  132976 out.go:177] 

                                                
                                                
** /stderr **
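The stderr above shows minikube's retry.go re-running the failed kubectl apply with growing, jittered delays for roughly 100 seconds before giving up. Purely as an illustration of that pattern (minikube's actual implementation is Go, not shell), a retry loop with a plain doubling delay:

	# Retry `kubectl apply` up to 6 times, doubling the delay after each failure.
	delay=1
	for attempt in 1 2 3 4 5 6; do
	  kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml && break
	  echo "apply failed (attempt ${attempt}); retrying in ${delay}s" >&2
	  sleep "${delay}"
	  delay=$((delay * 2))
	done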
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792: exit status 6 (249.715019ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:05:12.392770  133253 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-270792" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-270792" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (103.10s)
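The post-mortem exposes a second, independent symptom: the profile's entry is missing from the kubeconfig ("does not appear in .../kubeconfig"), which is also why the addon's kube-client validation failed with context "ingress-addon-legacy-270792" does not exist. A sketch of the fix the warning itself proposes, assuming the cluster were otherwise healthy (here it is not, so the final check would still fail):

	# Inspect the stale kubeconfig, then let minikube rewrite its entry.
	kubectl config get-contexts                          # profile context is absent
	minikube -p ingress-addon-legacy-270792 update-context
	kubectl --context ingress-addon-legacy-270792 cluster-info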

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (94.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-270792 addons enable ingress-dns --alsologtostderr -v=5
E0229 01:05:25.551975  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-270792 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m34.390609815s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:05:12.462168  133283 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:05:12.462313  133283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:05:12.462325  133283 out.go:304] Setting ErrFile to fd 2...
	I0229 01:05:12.462329  133283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:05:12.462514  133283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:05:12.462760  133283 mustload.go:65] Loading cluster: ingress-addon-legacy-270792
	I0229 01:05:12.463091  133283 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:05:12.463112  133283 addons.go:597] checking whether the cluster is paused
	I0229 01:05:12.463190  133283 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:05:12.463203  133283 host.go:66] Checking if "ingress-addon-legacy-270792" exists ...
	I0229 01:05:12.463532  133283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:05:12.463574  133283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:05:12.478860  133283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0229 01:05:12.479292  133283 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:05:12.479875  133283 main.go:141] libmachine: Using API Version  1
	I0229 01:05:12.479893  133283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:05:12.480343  133283 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:05:12.480570  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
	I0229 01:05:12.482296  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 01:05:12.482503  133283 ssh_runner.go:195] Run: systemctl --version
	I0229 01:05:12.482528  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 01:05:12.484805  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:05:12.485234  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 01:05:12.485264  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:05:12.485367  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 01:05:12.485530  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 01:05:12.485705  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 01:05:12.485865  133283 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 01:05:12.567971  133283 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:05:12.587531  133283 main.go:141] libmachine: Making call to close driver server
	I0229 01:05:12.587566  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:05:12.587851  133283 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:05:12.587873  133283 main.go:141] libmachine: Making call to close connection to plugin binary
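	(The `docker ps` call above is how the addon command decides whether the cluster is paused: it lists paused containers whose names match kube-system pods, and any output means "paused". A minimal local sketch of the same probe follows; minikube actually runs the command over SSH inside the VM, so the standalone program below is illustrative only.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Sketch of the paused-cluster probe logged above: list paused containers
	// whose names match kube-system pods; any output means the cluster is
	// paused. Assumes a local docker CLI rather than minikube's SSH runner.
	func main() {
		out, err := exec.Command("docker", "ps",
			"--filter", "status=paused",
			"--filter", "name=k8s_.*_(kube-system)_",
			"--format", "{{.ID}}").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("paused=%v (%d kube-system containers paused)\n", len(ids) > 0, len(ids))
	}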
	I0229 01:05:12.590505  133283 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 01:05:12.592038  133283 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 01:05:12.592055  133283 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-270792"
	I0229 01:05:12.592062  133283 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-270792"
	I0229 01:05:12.592100  133283 host.go:66] Checking if "ingress-addon-legacy-270792" exists ...
	I0229 01:05:12.592369  133283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:05:12.592414  133283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:05:12.606585  133283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36771
	I0229 01:05:12.607034  133283 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:05:12.607625  133283 main.go:141] libmachine: Using API Version  1
	I0229 01:05:12.607649  133283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:05:12.607952  133283 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:05:12.608418  133283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:05:12.608464  133283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:05:12.622271  133283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42591
	I0229 01:05:12.622607  133283 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:05:12.623025  133283 main.go:141] libmachine: Using API Version  1
	I0229 01:05:12.623046  133283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:05:12.623367  133283 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:05:12.623544  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
	I0229 01:05:12.625013  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
	I0229 01:05:12.627050  133283 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 01:05:12.628511  133283 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 01:05:12.628529  133283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 01:05:12.628544  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
	I0229 01:05:12.631338  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:05:12.631779  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
	I0229 01:05:12.631839  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
	I0229 01:05:12.631941  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
	I0229 01:05:12.632127  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
	I0229 01:05:12.632280  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
	I0229 01:05:12.632421  133283 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
	I0229 01:05:12.729167  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:12.835657  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:12.835706  133283 retry.go:31] will retry after 331.348682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:13.167251  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:13.267367  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:13.267424  133283 retry.go:31] will retry after 531.58674ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:13.799185  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:13.862686  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:13.862744  133283 retry.go:31] will retry after 533.635389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:14.396537  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:14.495107  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:14.495159  133283 retry.go:31] will retry after 904.119041ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:15.400305  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:15.464639  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:15.464685  133283 retry.go:31] will retry after 1.839853208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:17.305741  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:17.366446  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:17.366499  133283 retry.go:31] will retry after 1.840721377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:19.208947  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:19.272748  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:19.272791  133283 retry.go:31] will retry after 1.641430628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:20.915696  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:20.980561  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:20.980624  133283 retry.go:31] will retry after 3.274891601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:24.256441  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:24.326271  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:24.326311  133283 retry.go:31] will retry after 3.781061373s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:28.107561  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:28.221535  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:28.221582  133283 retry.go:31] will retry after 8.863408292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:37.088840  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:37.148131  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:37.148163  133283 retry.go:31] will retry after 17.512001489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:54.661955  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:05:54.727037  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:05:54.727072  133283 retry.go:31] will retry after 12.089197035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:06:06.818949  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:06:06.929316  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:06:06.929354  133283 retry.go:31] will retry after 39.789306875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:06:46.722168  133283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 01:06:46.787930  133283 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 01:06:46.788003  133283 main.go:141] libmachine: Making call to close driver server
	I0229 01:06:46.788019  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:06:46.788302  133283 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:06:46.788316  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Closing plugin on server side
	I0229 01:06:46.788320  133283 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:06:46.788332  133283 main.go:141] libmachine: Making call to close driver server
	I0229 01:06:46.788340  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Close
	I0229 01:06:46.788603  133283 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:06:46.788622  133283 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:06:46.788629  133283 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Closing plugin on server side
	I0229 01:06:46.791576  133283 out.go:177] 
	W0229 01:06:46.792999  133283 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0229 01:06:46.793021  133283 out.go:239] * 
	W0229 01:06:46.795114  133283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:06:46.796431  133283 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
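(What the stderr transcript above shows is minikube's addon applier retrying kubectl apply with growing, jittered delays, 331ms, 531ms, 533ms, 904ms, 1.8s, ... up to 39.8s, while the apiserver on localhost:8443 keeps refusing connections, until it gives up with MK_ADDON_ENABLE after roughly 94 seconds. Below is a rough sketch of that retry-with-backoff shape; the names and constants are illustrative, not minikube's actual retry.go.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff is a minimal sketch of the pattern visible in the
// retry.go log lines above: run fn, and on failure sleep for a jittered,
// roughly doubling delay before trying again, until maxElapsed has passed.
func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
		}
		// Jitter the delay so concurrent retries don't synchronize,
		// then roughly double it for the next attempt.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("connection to the server localhost:8443 was refused")
	}, 5*time.Second)
	fmt.Println(err)
}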
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792: exit status 6 (248.970889ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0229 01:06:47.032756  133523 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-270792" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-270792" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (94.64s)
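(This post-mortem and the next one fail the same way before ever reaching the VM: status.go cannot extract the apiserver IP because the profile has no entry in the kubeconfig, which is also what the "stale minikube-vm" warning points at; `minikube update-context` rewrites that entry. A sketch of that kind of lookup using client-go's clientcmd loader follows; the assumption that the profile name doubles as the cluster key is illustrative.)

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Checks whether a named cluster entry exists in a kubeconfig file,
// mirroring the `"..." does not appear in .../kubeconfig` error above.
func main() {
	path := os.Getenv("KUBECONFIG")
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "load kubeconfig: %v\n", err)
		os.Exit(1)
	}
	const profile = "ingress-addon-legacy-270792" // illustrative profile name
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		fmt.Fprintf(os.Stderr, "extract IP: %q does not appear in %s\n", profile, path)
		os.Exit(1)
	}
	fmt.Println("endpoint:", cluster.Server)
}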

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-270792 -n ingress-addon-legacy-270792: exit status 6 (229.343904ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0229 01:06:47.264923  133553 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-270792" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-270792" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.23s)

TestKubernetesUpgrade (397.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
E0229 01:33:00.913334  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : exit status 109 (4m53.345976188s)

-- stdout --
	* [kubernetes-upgrade-011190] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-011190 in cluster kubernetes-upgrade-011190
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0229 01:32:56.252250  148329 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:32:56.252464  148329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:56.252479  148329 out.go:304] Setting ErrFile to fd 2...
	I0229 01:32:56.252487  148329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:32:56.252747  148329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:32:56.253963  148329 out.go:298] Setting JSON to false
	I0229 01:32:56.255664  148329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4528,"bootTime":1709165849,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:32:56.255975  148329 start.go:139] virtualization: kvm guest
	I0229 01:32:56.258407  148329 out.go:177] * [kubernetes-upgrade-011190] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:32:56.260134  148329 notify.go:220] Checking for updates...
	I0229 01:32:56.260145  148329 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:32:56.261698  148329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:32:56.263139  148329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:32:56.264398  148329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:32:56.265713  148329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:32:56.267045  148329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:32:56.268877  148329 config.go:182] Loaded profile config "NoKubernetes-548668": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0229 01:32:56.269031  148329 config.go:182] Loaded profile config "gvisor-335344": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 01:32:56.269169  148329 config.go:182] Loaded profile config "running-upgrade-703383": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 01:32:56.269272  148329 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:32:56.313662  148329 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:32:56.314894  148329 start.go:299] selected driver: kvm2
	I0229 01:32:56.314912  148329 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:32:56.314930  148329 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:32:56.316115  148329 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:32:56.316228  148329 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:32:56.335897  148329 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:32:56.335967  148329 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:32:56.336245  148329 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 01:32:56.336356  148329 cni.go:84] Creating CNI manager for ""
	I0229 01:32:56.336391  148329 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:32:56.336404  148329 start_flags.go:323] config:
	{Name:kubernetes-upgrade-011190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:32:56.336625  148329 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:32:56.338588  148329 out.go:177] * Starting control plane node kubernetes-upgrade-011190 in cluster kubernetes-upgrade-011190
	I0229 01:32:56.339830  148329 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:32:56.339873  148329 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 01:32:56.339887  148329 cache.go:56] Caching tarball of preloaded images
	I0229 01:32:56.339991  148329 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:32:56.340006  148329 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 01:32:56.340150  148329 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/config.json ...
	I0229 01:32:56.340187  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/config.json: {Name:mkd4646f57e59726bedd6c82fb053ab737df6081 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:32:56.340353  148329 start.go:365] acquiring machines lock for kubernetes-upgrade-011190: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:33:16.266819  148329 start.go:369] acquired machines lock for "kubernetes-upgrade-011190" in 19.926415024s
	I0229 01:33:16.266886  148329 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-011190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
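	(The machines lock acquired above took 19.9s because other profiles on the host, NoKubernetes-548668, running-upgrade-703383, and others, were competing for it; the lock parameters in the log are Delay:500ms Timeout:13m0s, i.e. poll every half second and give up after 13 minutes. Below is a stand-in sketch of that acquire loop using an O_EXCL lock file; minikube's real lock implementation differs.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// tryLock attempts to take an exclusive lock by creating the lock file;
	// O_EXCL makes the create fail if the file already exists.
	func tryLock(path string) (bool, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			if os.IsExist(err) {
				return false, nil // someone else holds the lock
			}
			return false, err
		}
		return true, f.Close()
	}

	// acquire polls tryLock every delay until timeout, mirroring the
	// Delay:500ms Timeout:13m0s parameters in the log above.
	func acquire(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := tryLock(path)
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		start := time.Now()
		if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
		fmt.Printf("acquired machines lock in %s\n", time.Since(start))
		os.Remove("/tmp/machines.lock")
	}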
	I0229 01:33:16.267025  148329 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:33:16.269179  148329 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 01:33:16.269394  148329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:33:16.269449  148329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:33:16.285575  148329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37379
	I0229 01:33:16.286097  148329 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:33:16.286702  148329 main.go:141] libmachine: Using API Version  1
	I0229 01:33:16.286721  148329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:33:16.287059  148329 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:33:16.287267  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetMachineName
	I0229 01:33:16.287444  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:16.287608  148329 start.go:159] libmachine.API.Create for "kubernetes-upgrade-011190" (driver="kvm2")
	I0229 01:33:16.287643  148329 client.go:168] LocalClient.Create starting
	I0229 01:33:16.287677  148329 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem
	I0229 01:33:16.287746  148329 main.go:141] libmachine: Decoding PEM data...
	I0229 01:33:16.287772  148329 main.go:141] libmachine: Parsing certificate...
	I0229 01:33:16.287838  148329 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem
	I0229 01:33:16.287872  148329 main.go:141] libmachine: Decoding PEM data...
	I0229 01:33:16.287894  148329 main.go:141] libmachine: Parsing certificate...
	I0229 01:33:16.287919  148329 main.go:141] libmachine: Running pre-create checks...
	I0229 01:33:16.287934  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .PreCreateCheck
	I0229 01:33:16.288272  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetConfigRaw
	I0229 01:33:16.288708  148329 main.go:141] libmachine: Creating machine...
	I0229 01:33:16.288735  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Create
	I0229 01:33:16.288872  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Creating KVM machine...
	I0229 01:33:16.290008  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found existing default KVM network
	I0229 01:33:16.291205  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.291052  148525 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:e9:71} reservation:<nil>}
	I0229 01:33:16.292074  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.291984  148525 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:2a:fd} reservation:<nil>}
	I0229 01:33:16.292982  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.292904  148525 network.go:207] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ca820}
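	(The three network.go lines above are the subnet scan: candidate private /24s are checked against the host's existing interfaces, taken ranges are skipped, 192.168.39.0/24 on virbr1 and 192.168.50.0/24 on virbr2 here, and the first free one, 192.168.61.0/24, is used for the new KVM network. A standard-library sketch of that scan follows; the candidate list is illustrative.)

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate CIDR whose range does not
	// contain any address already assigned to a local interface. Minikube's
	// real selection logic lives in network.go; this is a simplification.
	func firstFreeSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, cidr := range candidates {
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return "", err
			}
			taken := false
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
					taken = true // e.g. virbr1 already owns 192.168.39.1
					break
				}
			}
			if !taken {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		free, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
		fmt.Println(free, err)
	}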
	I0229 01:33:16.298537  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | trying to create private KVM network mk-kubernetes-upgrade-011190 192.168.61.0/24...
	I0229 01:33:16.370568  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | private KVM network mk-kubernetes-upgrade-011190 192.168.61.0/24 created
	I0229 01:33:16.370613  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting up store path in /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190 ...
	I0229 01:33:16.370627  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.370478  148525 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:33:16.370655  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Building disk image from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:33:16.370673  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Downloading /home/jenkins/minikube-integration/18063-115328/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:33:16.619917  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.619776  148525 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa...
	I0229 01:33:16.885683  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.885521  148525 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/kubernetes-upgrade-011190.rawdisk...
	I0229 01:33:16.885722  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Writing magic tar header
	I0229 01:33:16.885741  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Writing SSH key tar header
	I0229 01:33:16.885755  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:16.885633  148525 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190 ...
	I0229 01:33:16.885775  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190
	I0229 01:33:16.885802  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines
	I0229 01:33:16.885812  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:33:16.885827  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190 (perms=drwx------)
	I0229 01:33:16.885852  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:33:16.885864  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube (perms=drwxr-xr-x)
	I0229 01:33:16.885958  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328
	I0229 01:33:16.885995  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:33:16.886020  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328 (perms=drwxrwxr-x)
	I0229 01:33:16.886038  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:33:16.886051  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Checking permissions on dir: /home
	I0229 01:33:16.886064  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Skipping /home - not owner
	I0229 01:33:16.886089  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:33:16.886102  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:33:16.886117  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Creating domain...
	I0229 01:33:16.887204  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) define libvirt domain using xml: 
	I0229 01:33:16.887250  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) <domain type='kvm'>
	I0229 01:33:16.887268  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <name>kubernetes-upgrade-011190</name>
	I0229 01:33:16.887282  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <memory unit='MiB'>2200</memory>
	I0229 01:33:16.887297  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <vcpu>2</vcpu>
	I0229 01:33:16.887309  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <features>
	I0229 01:33:16.887319  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <acpi/>
	I0229 01:33:16.887324  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <apic/>
	I0229 01:33:16.887331  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <pae/>
	I0229 01:33:16.887336  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     
	I0229 01:33:16.887348  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   </features>
	I0229 01:33:16.887362  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <cpu mode='host-passthrough'>
	I0229 01:33:16.887377  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   
	I0229 01:33:16.887389  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   </cpu>
	I0229 01:33:16.887418  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <os>
	I0229 01:33:16.887438  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <type>hvm</type>
	I0229 01:33:16.887449  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <boot dev='cdrom'/>
	I0229 01:33:16.887461  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <boot dev='hd'/>
	I0229 01:33:16.887484  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <bootmenu enable='no'/>
	I0229 01:33:16.887496  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   </os>
	I0229 01:33:16.887535  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   <devices>
	I0229 01:33:16.887561  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <disk type='file' device='cdrom'>
	I0229 01:33:16.887578  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/boot2docker.iso'/>
	I0229 01:33:16.887590  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <target dev='hdc' bus='scsi'/>
	I0229 01:33:16.887603  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <readonly/>
	I0229 01:33:16.887614  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </disk>
	I0229 01:33:16.887625  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <disk type='file' device='disk'>
	I0229 01:33:16.887650  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:33:16.887674  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/kubernetes-upgrade-011190.rawdisk'/>
	I0229 01:33:16.887689  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <target dev='hda' bus='virtio'/>
	I0229 01:33:16.887700  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </disk>
	I0229 01:33:16.887718  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <interface type='network'>
	I0229 01:33:16.887746  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <source network='mk-kubernetes-upgrade-011190'/>
	I0229 01:33:16.887759  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <model type='virtio'/>
	I0229 01:33:16.887779  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </interface>
	I0229 01:33:16.887795  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <interface type='network'>
	I0229 01:33:16.887804  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <source network='default'/>
	I0229 01:33:16.887821  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <model type='virtio'/>
	I0229 01:33:16.887835  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </interface>
	I0229 01:33:16.887846  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <serial type='pty'>
	I0229 01:33:16.887860  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <target port='0'/>
	I0229 01:33:16.887871  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </serial>
	I0229 01:33:16.887883  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <console type='pty'>
	I0229 01:33:16.887906  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <target type='serial' port='0'/>
	I0229 01:33:16.887920  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </console>
	I0229 01:33:16.887931  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     <rng model='virtio'>
	I0229 01:33:16.887943  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)       <backend model='random'>/dev/random</backend>
	I0229 01:33:16.887953  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     </rng>
	I0229 01:33:16.887960  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     
	I0229 01:33:16.887970  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)     
	I0229 01:33:16.887987  148329 main.go:141] libmachine: (kubernetes-upgrade-011190)   </devices>
	I0229 01:33:16.888008  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) </domain>
	I0229 01:33:16.888035  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) 
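	(The XML dumped above is the complete libvirt domain definition: 2 vCPUs, 2200 MiB of memory, a host-passthrough CPU, the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, one NIC on the private mk-kubernetes-upgrade-011190 network plus one on default, a serial console, and a virtio RNG fed from /dev/random. Defining and booting such a document by hand would look roughly like this; the sketch assumes the libvirt.org/go/libvirt bindings and a local qemu:///system socket, and keeps error handling minimal.)

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	// Sketch: define and start a domain from an XML document like the one
	// logged above.
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'> document
		if err != nil {
			panic(err)
		}
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil { // boots the VM
			panic(err)
		}
		fmt.Println("domain defined and started")
	}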
	I0229 01:33:16.892481  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:1f:ac:d5 in network default
	I0229 01:33:16.893042  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Ensuring networks are active...
	I0229 01:33:16.893080  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:16.893818  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Ensuring network default is active
	I0229 01:33:16.894304  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Ensuring network mk-kubernetes-upgrade-011190 is active
	I0229 01:33:16.894913  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Getting domain xml...
	I0229 01:33:16.895781  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Creating domain...
	I0229 01:33:18.128753  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Waiting to get IP...
	I0229 01:33:18.129721  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.130177  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.130206  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:18.130154  148525 retry.go:31] will retry after 191.649954ms: waiting for machine to come up
	I0229 01:33:18.323805  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.324397  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.324426  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:18.324336  148525 retry.go:31] will retry after 364.47813ms: waiting for machine to come up
	I0229 01:33:18.690428  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.690879  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:18.690906  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:18.690826  148525 retry.go:31] will retry after 333.193716ms: waiting for machine to come up
	I0229 01:33:19.025988  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:19.026718  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:19.026842  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:19.026684  148525 retry.go:31] will retry after 607.198068ms: waiting for machine to come up
	I0229 01:33:19.635098  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:19.635607  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:19.635636  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:19.635579  148525 retry.go:31] will retry after 571.249966ms: waiting for machine to come up
	I0229 01:33:20.208321  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:20.208871  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:20.208897  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:20.208832  148525 retry.go:31] will retry after 751.646547ms: waiting for machine to come up
	I0229 01:33:20.962035  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:20.962575  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:20.962607  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:20.962522  148525 retry.go:31] will retry after 884.614546ms: waiting for machine to come up
	I0229 01:33:21.849313  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:21.849746  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:21.849792  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:21.849681  148525 retry.go:31] will retry after 1.322919463s: waiting for machine to come up
	I0229 01:33:23.174021  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:23.174525  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:23.174549  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:23.174464  148525 retry.go:31] will retry after 1.569767659s: waiting for machine to come up
	I0229 01:33:24.745619  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:24.746160  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:24.746190  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:24.746102  148525 retry.go:31] will retry after 1.84877809s: waiting for machine to come up
	I0229 01:33:26.597198  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:26.597771  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:26.597839  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:26.597700  148525 retry.go:31] will retry after 1.988151572s: waiting for machine to come up
	I0229 01:33:28.588171  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:28.588645  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:28.588679  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:28.588603  148525 retry.go:31] will retry after 2.294864178s: waiting for machine to come up
	I0229 01:33:30.884625  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:30.885037  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:30.885069  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:30.885003  148525 retry.go:31] will retry after 4.430635609s: waiting for machine to come up
	I0229 01:33:35.316774  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:35.317139  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find current IP address of domain kubernetes-upgrade-011190 in network mk-kubernetes-upgrade-011190
	I0229 01:33:35.317163  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | I0229 01:33:35.317097  148525 retry.go:31] will retry after 4.272414704s: waiting for machine to come up
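The `retry.go:31` lines above show libmachine polling libvirt for the domain's DHCP lease with a jittered, steadily growing backoff until the guest reports an address. A minimal Go sketch of that wait loop (a hypothetical helper for illustration, not minikube's actual retry.go; the doubling factor is an approximation of the intervals seen above):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a
    // jittered, growing interval between tries -- the same shape as the
    // "will retry after ..." intervals in the log above.
    func retry(maxAttempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < maxAttempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base * time.Duration(int64(1)<<uint(i))    // growing backoff
            d = d/2 + time.Duration(rand.Int63n(int64(d)))  // +/- 50% jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        n := 0
        _ = retry(5, 200*time.Millisecond, func() error {
            n++
            if n < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
    }
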
	I0229 01:33:39.593073  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.593582  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Found IP for machine: 192.168.61.22
	I0229 01:33:39.593624  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Reserving static IP address...
	I0229 01:33:39.593640  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has current primary IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.593927  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-011190", mac: "52:54:00:c5:7c:36", ip: "192.168.61.22"} in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.667618  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Getting to WaitForSSH function...
	I0229 01:33:39.667650  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Reserved static IP address: 192.168.61.22
	I0229 01:33:39.667664  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Waiting for SSH to be available...
	I0229 01:33:39.670543  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.670977  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:39.671010  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.671163  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Using SSH client type: external
	I0229 01:33:39.671190  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa (-rw-------)
	I0229 01:33:39.671221  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:33:39.671236  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | About to run SSH command:
	I0229 01:33:39.671253  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | exit 0
	I0229 01:33:39.797820  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | SSH cmd err, output: <nil>: 
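SSH reachability is confirmed by running `exit 0` through an external `ssh` invocation with host-key checking disabled, as the flag dump above shows. A hedged Go sketch of that probe (address and key path are illustrative, taken from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady mirrors the external-client probe above: a throwaway
    // "exit 0" succeeds only once sshd is up and accepting the key.
    func sshReady(addr, keyPath string) bool {
        err := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+addr, "exit 0").Run()
        return err == nil
    }

    func main() {
        fmt.Println(sshReady("192.168.61.22", "/path/to/id_rsa"))
    }
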
	I0229 01:33:39.798098  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) KVM machine creation complete!
	I0229 01:33:39.798328  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetConfigRaw
	I0229 01:33:39.798957  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:39.799171  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:39.799340  148329 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:33:39.799360  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetState
	I0229 01:33:39.800686  148329 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:33:39.800699  148329 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:33:39.800704  148329 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:33:39.800710  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:39.803119  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.803498  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:39.803528  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.803648  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:39.803823  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:39.803985  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:39.804123  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:39.804307  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:39.804496  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:39.804510  148329 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:33:39.913083  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:33:39.913112  148329 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:33:39.913124  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:39.916073  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.916456  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:39.916481  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:39.916689  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:39.916903  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:39.917087  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:39.917246  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:39.917442  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:39.917617  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:39.917628  148329 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:33:40.027231  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:33:40.027309  148329 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:33:40.027321  148329 main.go:141] libmachine: Provisioning with buildroot...
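The provisioner is chosen by matching the ID field of the guest's /etc/os-release (here `buildroot`, so the Buildroot provisioner is used). A minimal sketch of that lookup, assuming simple key=value parsing:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // osReleaseID returns the ID= value from an os-release file, the key
    // the provisioner is matched against ("buildroot" in this run).
    func osReleaseID(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
            }
        }
        return "", sc.Err()
    }

    func main() {
        id, err := osReleaseID("/etc/os-release")
        fmt.Println(id, err)
    }
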
	I0229 01:33:40.027336  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetMachineName
	I0229 01:33:40.027637  148329 buildroot.go:166] provisioning hostname "kubernetes-upgrade-011190"
	I0229 01:33:40.027671  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetMachineName
	I0229 01:33:40.027865  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.030546  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.030950  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.030994  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.031158  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.031334  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.031555  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.031718  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.031906  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:40.032116  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:40.032133  148329 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-011190 && echo "kubernetes-upgrade-011190" | sudo tee /etc/hostname
	I0229 01:33:40.152435  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-011190
	
	I0229 01:33:40.152466  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.155494  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.155960  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.155992  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.156218  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.156435  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.156608  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.156814  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.157030  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:40.157253  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:40.157280  148329 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-011190' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-011190/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-011190' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:33:40.275008  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: 
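The shell fragment above is an idempotent hosts-file fixup: if no entry in /etc/hosts already ends with the machine name, it either rewrites an existing `127.0.1.1` line in place with sed or appends a fresh `127.0.1.1 kubernetes-upgrade-011190` line, so repeated provisioning runs never duplicate the mapping.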
	I0229 01:33:40.275049  148329 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:33:40.275073  148329 buildroot.go:174] setting up certificates
	I0229 01:33:40.275086  148329 provision.go:83] configureAuth start
	I0229 01:33:40.275095  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetMachineName
	I0229 01:33:40.275393  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetIP
	I0229 01:33:40.278253  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.278658  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.278691  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.278824  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.281084  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.281419  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.281459  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.281549  148329 provision.go:138] copyHostCerts
	I0229 01:33:40.281611  148329 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:33:40.281633  148329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:33:40.281714  148329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:33:40.281866  148329 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:33:40.281878  148329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:33:40.281927  148329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:33:40.282025  148329 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:33:40.282054  148329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:33:40.282083  148329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:33:40.282168  148329 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-011190 san=[192.168.61.22 192.168.61.22 localhost 127.0.0.1 minikube kubernetes-upgrade-011190]
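The server certificate is generated with a SAN list covering every name the Docker endpoint may be reached by: the VM IP, localhost/127.0.0.1, and both the generic and profile-specific hostnames. A sketch of the implied x509 template (SAN values copied from the san=[...] line above; serial, lifetime, and usages are placeholder choices, not minikube's):

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertTemplate shows the SAN layout from the provision line
    // above; everything beyond the SANs and org is illustrative.
    func serverCertTemplate() *x509.Certificate {
        return &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-011190"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.61.22"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-011190"},
        }
    }

    func main() { _ = serverCertTemplate() }
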
	I0229 01:33:40.553105  148329 provision.go:172] copyRemoteCerts
	I0229 01:33:40.553166  148329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:33:40.553191  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.555976  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.556336  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.556365  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.556577  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.556800  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.556962  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.557092  148329 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:33:40.640191  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:33:40.666334  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 01:33:40.692129  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:33:40.719422  148329 provision.go:86] duration metric: configureAuth took 444.322811ms
	I0229 01:33:40.719452  148329 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:33:40.719607  148329 config.go:182] Loaded profile config "kubernetes-upgrade-011190": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 01:33:40.719629  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:40.719926  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.722989  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.723362  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.723414  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.723555  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.723786  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.723959  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.724123  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.724293  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:40.724480  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:40.724493  148329 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:33:40.836540  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:33:40.836564  148329 buildroot.go:70] root file system type: tmpfs
	I0229 01:33:40.836702  148329 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:33:40.836730  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.839535  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.839917  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.839945  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.840091  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.840303  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.840484  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.840676  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.840876  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:40.841045  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:40.841103  148329 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:33:40.970080  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:33:40.970113  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:40.973192  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.973594  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:40.973617  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:40.973822  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:40.974035  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.974206  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:40.974399  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:40.974638  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:40.974824  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:40.974841  148329 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:33:41.752050  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
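The "diff: can't stat" message is expected on first boot: no docker.service exists yet, so the diff fails, which makes the `||` branch install the freshly rendered unit, reload systemd, and enable and restart Docker (hence the symlink creation above). A Go sketch of that install-if-changed pattern (command list mirrors the one-liner above; error handling simplified):

    package main

    import "os/exec"

    // installIfChanged replaces the live unit and bounces the service
    // only when the rendered file differs -- or, as on this first boot,
    // when no live unit exists yet and diff itself fails.
    func installIfChanged(newPath, livePath string) error {
        if exec.Command("sudo", "diff", "-u", livePath, newPath).Run() == nil {
            return nil // identical: leave the running service alone
        }
        for _, args := range [][]string{
            {"mv", newPath, livePath},
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = installIfChanged("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service")
    }
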
	I0229 01:33:41.752083  148329 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:33:41.752096  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetURL
	I0229 01:33:41.753375  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Using libvirt version 6000000
	I0229 01:33:41.755822  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.756220  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:41.756253  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.756450  148329 main.go:141] libmachine: Docker is up and running!
	I0229 01:33:41.756470  148329 main.go:141] libmachine: Reticulating splines...
	I0229 01:33:41.756479  148329 client.go:171] LocalClient.Create took 25.468827038s
	I0229 01:33:41.756506  148329 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-011190" took 25.468899898s
	I0229 01:33:41.756516  148329 start.go:300] post-start starting for "kubernetes-upgrade-011190" (driver="kvm2")
	I0229 01:33:41.756528  148329 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:33:41.756557  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:41.756815  148329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:33:41.756837  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:41.758961  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.759237  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:41.759262  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.759426  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:41.759583  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:41.759723  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:41.759813  148329 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:33:41.844803  148329 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:33:41.849043  148329 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:33:41.849064  148329 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:33:41.849120  148329 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:33:41.849184  148329 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:33:41.849263  148329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:33:41.859039  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:33:41.884705  148329 start.go:303] post-start completed in 128.175833ms
	I0229 01:33:41.884753  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetConfigRaw
	I0229 01:33:41.885499  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetIP
	I0229 01:33:41.888092  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.888561  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:41.888590  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.888811  148329 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/config.json ...
	I0229 01:33:41.889023  148329 start.go:128] duration metric: createHost completed in 25.621984758s
	I0229 01:33:41.889046  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:41.891320  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.891634  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:41.891662  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:41.891768  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:41.891951  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:41.892122  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:41.892290  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:41.892452  148329 main.go:141] libmachine: Using SSH client type: native
	I0229 01:33:41.892655  148329 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.22 22 <nil> <nil>}
	I0229 01:33:41.892667  148329 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:33:41.998297  148329 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709170421.973942231
	
	I0229 01:33:41.998336  148329 fix.go:206] guest clock: 1709170421.973942231
	I0229 01:33:41.998346  148329 fix.go:219] Guest: 2024-02-29 01:33:41.973942231 +0000 UTC Remote: 2024-02-29 01:33:41.889034144 +0000 UTC m=+45.700244259 (delta=84.908087ms)
	I0229 01:33:41.998371  148329 fix.go:190] guest clock delta is within tolerance: 84.908087ms
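The clock check runs `date +%s.%N` on the guest and compares the result with the host's clock; the ~85ms delta here is inside tolerance, so no resync is needed. A sketch of the comparison (float parsing is only approximate at nanosecond scale, which is fine for a skew check):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestDelta parses the guest's `date +%s.%N` output and returns how
    // far the host clock is ahead of it (approximate, as in fix.go above).
    func guestDelta(out string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    func main() {
        d, _ := guestDelta("1709170421.973942231", time.Now())
        fmt.Println(d)
    }
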
	I0229 01:33:41.998377  148329 start.go:83] releasing machines lock for "kubernetes-upgrade-011190", held for 25.731527062s
	I0229 01:33:41.998408  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:41.998739  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetIP
	I0229 01:33:42.001688  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.002161  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:42.002200  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.002382  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:42.003030  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:42.003260  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:33:42.003379  148329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:33:42.003424  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:42.003484  148329 ssh_runner.go:195] Run: cat /version.json
	I0229 01:33:42.003510  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:33:42.006097  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.006166  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.006430  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:42.006453  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.006479  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:42.006497  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:42.006639  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:42.006658  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:33:42.006874  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:42.006912  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:33:42.007062  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:42.007075  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:33:42.007248  148329 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:33:42.007253  148329 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:33:42.090843  148329 ssh_runner.go:195] Run: systemctl --version
	I0229 01:33:42.116456  148329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:33:42.122460  148329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:33:42.122517  148329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 01:33:42.133006  148329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 01:33:42.150443  148329 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:33:42.150490  148329 start.go:475] detecting cgroup driver to use...
	I0229 01:33:42.150655  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:33:42.173860  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 01:33:42.190737  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:33:42.204664  148329 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:33:42.204748  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:33:42.215738  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:33:42.227878  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:33:42.239506  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:33:42.250975  148329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:33:42.262514  148329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:33:42.273746  148329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:33:42.283621  148329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:33:42.295867  148329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:33:42.423542  148329 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:33:42.448219  148329 start.go:475] detecting cgroup driver to use...
	I0229 01:33:42.448309  148329 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:33:42.465405  148329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:33:42.486155  148329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:33:42.512656  148329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:33:42.531479  148329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:33:42.546490  148329 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:33:42.580873  148329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:33:42.597742  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:33:42.620353  148329 ssh_runner.go:195] Run: which cri-dockerd
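The crictl.yaml write just above is the second of two during runtime selection: the file is first pointed at the containerd socket while containerd is configured and cgroup-driver detection runs, then rewritten to dockershim's /var/run/dockershim.sock once containerd and crio have been stopped and Docker is confirmed as the runtime for this v1.16.0 profile.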
	I0229 01:33:42.625053  148329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:33:42.637683  148329 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:33:42.657378  148329 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:33:42.802420  148329 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:33:42.927907  148329 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:33:42.928076  148329 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:33:42.947141  148329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:33:43.072810  148329 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:33:44.485098  148329 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.412243288s)
	I0229 01:33:44.485183  148329 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:33:44.515881  148329 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:33:44.543808  148329 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 01:33:44.543866  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetIP
	I0229 01:33:44.547111  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:44.547505  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:33:44.547544  148329 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:33:44.547809  148329 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 01:33:44.553359  148329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:33:44.571191  148329 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:33:44.571243  148329 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:33:44.592592  148329 docker.go:685] Got preloaded images: 
	I0229 01:33:44.592608  148329 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:33:44.592647  148329 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:33:44.603183  148329 ssh_runner.go:195] Run: which lz4
	I0229 01:33:44.607719  148329 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:33:44.612416  148329 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:33:44.612449  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 01:33:46.108402  148329 docker.go:649] Took 1.500715 seconds to copy over tarball
	I0229 01:33:46.108482  148329 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:33:48.197360  148329 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08881995s)
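Because the expected images were not yet present (`kube-apiserver:v1.16.0 wasn't preloaded`), the ~370MB preload tarball is copied into the guest and unpacked over /var, prepopulating Docker's overlay2 image store in a couple of seconds rather than pulling each image over the network; Docker is then restarted to pick up the injected store.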
	I0229 01:33:48.197405  148329 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:33:48.235016  148329 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:33:48.249025  148329 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 01:33:48.270706  148329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:33:48.389478  148329 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:33:52.865141  148329 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.475620862s)
	I0229 01:33:52.865249  148329 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:33:52.885701  148329 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:33:52.885725  148329 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:33:52.885736  148329 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:33:52.887766  148329 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 01:33:52.887766  148329 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:33:52.887865  148329 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:33:52.887922  148329 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:33:52.888034  148329 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:33:52.888048  148329 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 01:33:52.888082  148329 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:33:52.888083  148329 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:33:52.888588  148329 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:33:52.888590  148329 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 01:33:52.888694  148329 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:33:52.889116  148329 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:33:52.889120  148329 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 01:33:52.889205  148329 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:33:52.889217  148329 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:33:52.889217  148329 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:33:53.026802  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:33:53.030709  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 01:33:53.039393  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 01:33:53.040967  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:33:53.041137  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:33:53.066013  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 01:33:53.067401  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:33:53.068503  148329 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 01:33:53.068545  148329 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:33:53.068545  148329 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 01:33:53.068576  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:33:53.068590  148329 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 01:33:53.068629  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 01:33:53.105364  148329 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 01:33:53.105417  148329 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 01:33:53.105468  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 01:33:53.139863  148329 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 01:33:53.139948  148329 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:33:53.140003  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:33:53.140144  148329 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 01:33:53.140185  148329 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:33:53.140223  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:33:53.157216  148329 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 01:33:53.157262  148329 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:33:53.157284  148329 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 01:33:53.157307  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 01:33:53.157322  148329 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:33:53.157366  148329 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:33:53.157403  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 01:33:53.157462  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 01:33:53.167498  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 01:33:53.179653  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 01:33:53.198641  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 01:33:53.206590  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 01:33:53.206636  148329 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 01:33:53.458953  148329 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:33:53.478583  148329 cache_images.go:92] LoadImages completed in 592.827091ms
	W0229 01:33:53.478691  148329 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
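The failure above is self-consistent: the preload shipped k8s.gcr.io-tagged images (see the docker images output earlier), this minikube build looks for registry.k8s.io names, and there are no per-image tarballs on the host to make up the difference. A quick hypothetical check of what is actually cached (directory taken from the error message):

	# Does a per-image cache exist for this run? (expected: no, matching the stat error)
	ls -la /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/ \
	  || echo "no per-image cache present"

The start continues regardless; kubeadm can pull the required images itself during preflight.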
	I0229 01:33:53.478755  148329 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:33:53.504244  148329 cni.go:84] Creating CNI manager for ""
	I0229 01:33:53.504277  148329 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:33:53.504296  148329 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:33:53.504319  148329 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.22 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-011190 NodeName:kubernetes-upgrade-011190 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:33:53.504517  148329 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-011190"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-011190
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.22:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
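Two details in the generated config above are worth noting: cgroupDriver: cgroupfs mirrors what the docker info probe earlier reported, so the kubelet and Docker agree, and the evictionHard "0%" thresholds together with imageGCHighThresholdPercent: 100 deliberately disable disk-pressure eviction inside the VM. The driver can be confirmed by hand with the same command minikube runs:

	docker info --format '{{.CgroupDriver}}'   # "cgroupfs" on this VM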
	
	I0229 01:33:53.504619  148329 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-011190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:33:53.504698  148329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 01:33:53.516357  148329 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:33:53.516462  148329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:33:53.526963  148329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0229 01:33:53.547519  148329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:33:53.568656  148329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2184 bytes)
	I0229 01:33:53.589880  148329 ssh_runner.go:195] Run: grep 192.168.61.22	control-plane.minikube.internal$ /etc/hosts
	I0229 01:33:53.595073  148329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:33:53.611588  148329 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190 for IP: 192.168.61.22
	I0229 01:33:53.611625  148329 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.611795  148329 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:33:53.611833  148329 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:33:53.611876  148329 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key
	I0229 01:33:53.611888  148329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt with IP's: []
	I0229 01:33:53.708566  148329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt ...
	I0229 01:33:53.708595  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt: {Name:mk740d1e4ab26f724a463353e9e32446c9375b81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.708787  148329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key ...
	I0229 01:33:53.708805  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key: {Name:mkecd6bb7912dee79e7752a073c122245a9c8332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.708903  148329 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key.885c1a51
	I0229 01:33:53.708922  148329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt.885c1a51 with IP's: [192.168.61.22 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:33:53.881234  148329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt.885c1a51 ...
	I0229 01:33:53.881273  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt.885c1a51: {Name:mk565db3a1c6cbc316b4c0e82fb23cb74d6e5318 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.881484  148329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key.885c1a51 ...
	I0229 01:33:53.881507  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key.885c1a51: {Name:mk0bcc80c8f0054c7b8655628511339795bb75e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.881605  148329 certs.go:337] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt.885c1a51 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt
	I0229 01:33:53.881690  148329 certs.go:341] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key.885c1a51 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key
	I0229 01:33:53.881752  148329 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key
	I0229 01:33:53.881767  148329 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.crt with IP's: []
	I0229 01:33:53.997171  148329 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.crt ...
	I0229 01:33:53.997206  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.crt: {Name:mkb74c472c53951219c12c82015a0122493feb31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.997413  148329 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key ...
	I0229 01:33:53.997435  148329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key: {Name:mke85255846f5aafe81b0aef5fda7f6433c35258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:33:53.997679  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:33:53.997733  148329 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:33:53.997751  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:33:53.997801  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:33:53.997837  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:33:53.997862  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:33:53.997903  148329 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:33:53.998690  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:33:54.028603  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:33:54.057825  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:33:54.085703  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:33:54.112910  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:33:54.139949  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:33:54.169088  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:33:54.198326  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:33:54.227434  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:33:54.253143  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:33:54.278574  148329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:33:54.305654  148329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:33:54.324785  148329 ssh_runner.go:195] Run: openssl version
	I0229 01:33:54.331179  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:33:54.343905  148329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:33:54.349492  148329 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:33:54.349550  148329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:33:54.356147  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:33:54.371532  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:33:54.386486  148329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:33:54.392766  148329 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:33:54.392831  148329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:33:54.401007  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:33:54.416685  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:33:54.431434  148329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:33:54.436548  148329 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:33:54.436603  148329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:33:54.442534  148329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:33:54.457815  148329 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:33:54.463764  148329 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:33:54.463842  148329 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-011190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:33:54.463989  148329 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:33:54.484884  148329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:33:54.495458  148329 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:33:54.507751  148329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:33:54.519833  148329 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:33:54.519874  148329 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:33:54.732385  148329 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:33:54.772307  148329 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:33:54.941424  148329 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
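The IsDockerSystemdCheck warning is advisory: kubeadm recommends the systemd cgroup driver, while this VM runs both Docker and the kubelet on cgroupfs. Were one to follow the recommendation (this test does not), the usual change is a daemon.json entry plus a matching cgroupDriver: systemd in the KubeletConfiguration; a sketch:

	# Switch Docker to the systemd cgroup driver (the kubelet's cgroupDriver must be changed to match)
	sudo tee /etc/docker/daemon.json <<'EOF'
	{ "exec-opts": ["native.cgroupdriver=systemd"] }
	EOF
	sudo systemctl restart docker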
	I0229 01:35:52.440259  148329 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:35:52.440429  148329 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:35:52.441755  148329 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:35:52.441840  148329 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:35:52.441935  148329 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:35:52.442062  148329 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:35:52.442203  148329 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:35:52.442358  148329 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:35:52.442487  148329 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:35:52.442568  148329 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:35:52.442668  148329 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:35:52.444044  148329 out.go:204]   - Generating certificates and keys ...
	I0229 01:35:52.444141  148329 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:35:52.444197  148329 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:35:52.444264  148329 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:35:52.444347  148329 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:35:52.444448  148329 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:35:52.444538  148329 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:35:52.444628  148329 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:35:52.444784  148329 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-011190 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0229 01:35:52.444867  148329 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:35:52.445063  148329 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-011190 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	I0229 01:35:52.445158  148329 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:35:52.445226  148329 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:35:52.445281  148329 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:35:52.445385  148329 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:35:52.445469  148329 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:35:52.445543  148329 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:35:52.445661  148329 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:35:52.445732  148329 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:35:52.445848  148329 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:35:52.447195  148329 out.go:204]   - Booting up control plane ...
	I0229 01:35:52.447279  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:35:52.447347  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:35:52.447415  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:35:52.447533  148329 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:35:52.447703  148329 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:35:52.447752  148329 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:35:52.447851  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:35:52.448011  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:35:52.448085  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:35:52.448253  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:35:52.448325  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:35:52.448516  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:35:52.448575  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:35:52.448728  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:35:52.448788  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:35:52.448941  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:35:52.448948  148329 kubeadm.go:322] 
	I0229 01:35:52.448980  148329 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:35:52.449043  148329 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:35:52.449061  148329 kubeadm.go:322] 
	I0229 01:35:52.449091  148329 kubeadm.go:322] This error is likely caused by:
	I0229 01:35:52.449119  148329 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:35:52.449207  148329 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:35:52.449217  148329 kubeadm.go:322] 
	I0229 01:35:52.449306  148329 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:35:52.449336  148329 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:35:52.449367  148329 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:35:52.449373  148329 kubeadm.go:322] 
	I0229 01:35:52.449486  148329 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:35:52.449598  148329 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:35:52.449696  148329 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:35:52.449761  148329 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:35:52.449889  148329 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:35:52.450005  148329 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 01:35:52.450114  148329 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-011190 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-011190 localhost] and IPs [192.168.61.22 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
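The troubleshooting advice kubeadm prints reduces to two questions: is the kubelet alive, and did a control-plane container start and then crash? The suggested commands, gathered into one pass:

	systemctl status kubelet --no-pager       # is the service running?
	journalctl -xeu kubelet | tail -n 50      # recent kubelet errors
	docker ps -a | grep kube | grep -v pause  # any exited control-plane containers?
	# for a failing container: docker logs CONTAINERID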
	
	I0229 01:35:52.450171  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:35:52.899002  148329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:35:52.917250  148329 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:35:52.929417  148329 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:35:52.929467  148329 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:35:53.090126  148329 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:35:53.122284  148329 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:35:53.205110  148329 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:37:49.037534  148329 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:37:49.037650  148329 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:37:49.039363  148329 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:37:49.039434  148329 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:37:49.039523  148329 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:37:49.039637  148329 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:37:49.039749  148329 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:37:49.039875  148329 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:37:49.039991  148329 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:37:49.040049  148329 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:37:49.040124  148329 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:37:49.042120  148329 out.go:204]   - Generating certificates and keys ...
	I0229 01:37:49.042241  148329 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:37:49.042340  148329 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:37:49.042477  148329 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:37:49.042586  148329 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:37:49.042682  148329 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:37:49.042751  148329 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:37:49.042837  148329 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:37:49.042923  148329 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:37:49.043015  148329 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:37:49.043105  148329 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:37:49.043147  148329 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:37:49.043214  148329 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:37:49.043254  148329 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:37:49.043294  148329 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:37:49.043340  148329 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:37:49.043381  148329 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:37:49.043430  148329 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:37:49.045039  148329 out.go:204]   - Booting up control plane ...
	I0229 01:37:49.045152  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:37:49.045270  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:37:49.045375  148329 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:37:49.045495  148329 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:37:49.045712  148329 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:37:49.045794  148329 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:37:49.045900  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:37:49.046164  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:37:49.046264  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:37:49.046524  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:37:49.046615  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:37:49.046868  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:37:49.046966  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:37:49.047209  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:37:49.047288  148329 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:37:49.047521  148329 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:37:49.047536  148329 kubeadm.go:322] 
	I0229 01:37:49.047587  148329 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:37:49.047621  148329 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:37:49.047627  148329 kubeadm.go:322] 
	I0229 01:37:49.047674  148329 kubeadm.go:322] This error is likely caused by:
	I0229 01:37:49.047723  148329 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:37:49.047879  148329 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:37:49.047889  148329 kubeadm.go:322] 
	I0229 01:37:49.048043  148329 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:37:49.048108  148329 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:37:49.048158  148329 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:37:49.048173  148329 kubeadm.go:322] 
	I0229 01:37:49.048365  148329 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:37:49.048494  148329 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 01:37:49.048621  148329 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 01:37:49.048774  148329 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:37:49.048893  148329 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:37:49.049018  148329 kubeadm.go:406] StartCluster complete in 3m54.585196781s
	I0229 01:37:49.049121  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:37:49.049219  148329 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:37:49.070717  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.070743  148329 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:37:49.070820  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:37:49.090033  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.090064  148329 logs.go:278] No container was found matching "etcd"
	I0229 01:37:49.090117  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:37:49.111192  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.111225  148329 logs.go:278] No container was found matching "coredns"
	I0229 01:37:49.111286  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:37:49.131791  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.131826  148329 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:37:49.131896  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:37:49.152616  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.152642  148329 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:37:49.152708  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:37:49.172754  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.172794  148329 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:37:49.172858  148329 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:37:49.197799  148329 logs.go:276] 0 containers: []
	W0229 01:37:49.197842  148329 logs.go:278] No container was found matching "kindnet"
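All seven probes above come back empty, so no control-plane container was ever created. A minimal sketch of the same per-component check, assuming the docker runtime and the k8s_ container-name prefix that cri-dockerd applies:

    #!/bin/bash
    # Probe for each expected control-plane container, as the log above does.
    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo docker ps -a --filter="name=k8s_${component}" --format='{{.ID}}')
      if [ -z "$ids" ]; then
        echo "no container found matching ${component}"
      else
        echo "${component}: ${ids}"
      fi
    done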
	I0229 01:37:49.197857  148329 logs.go:123] Gathering logs for container status ...
	I0229 01:37:49.197875  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:37:49.273153  148329 logs.go:123] Gathering logs for kubelet ...
	I0229 01:37:49.273187  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:37:49.331459  148329 logs.go:123] Gathering logs for dmesg ...
	I0229 01:37:49.331501  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:37:49.346687  148329 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:37:49.346714  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:37:49.443406  148329 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:37:49.443439  148329 logs.go:123] Gathering logs for Docker ...
	I0229 01:37:49.443458  148329 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
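The gathering step can be reproduced by hand; a sketch of the same five commands minikube runs above, assuming the v1.16.0 kubectl binary path it uses on the node:

    #!/bin/bash
    # Container status, kubelet journal, kernel warnings, node state, and
    # the Docker/cri-docker journals -- the same set minikube collects.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400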
	W0229 01:37:49.513071  148329 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
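The stderr warnings above point at the likely culprit: Docker is running with the cgroupfs cgroup driver while systemd is recommended, and a kubelet/runtime cgroup-driver mismatch is a classic reason for the kubelet to die right after start. A sketch of the usual Docker-side fix, assuming the standard daemon.json path; verify the path and existing contents on your distribution before applying:

    #!/bin/bash
    # Switch Docker to the systemd cgroup driver and restart the daemon.
    printf '{\n  "exec-opts": ["native.cgroupdriver=systemd"]\n}\n' \
      | sudo tee /etc/docker/daemon.json >/dev/null
    sudo systemctl restart docker
    # Confirm the active driver:
    sudo docker info --format '{{.CgroupDriver}}'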
	W0229 01:37:49.513147  148329 out.go:239] * 
	W0229 01:37:49.513230  148329 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:37:49.513263  148329 out.go:239] * 
	W0229 01:37:49.514367  148329 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:37:49.519310  148329 out.go:177] 
	W0229 01:37:49.520678  148329 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:37:49.520734  148329 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:37:49.520763  148329 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
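Minikube's suggestion can be applied directly to the failing start; a sketch reusing the profile, memory, and version from this run:

    # Retry the v1.16.0 start with the kubelet pinned to the systemd driver.
    out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 \
      --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 \
      --extra-config=kubelet.cgroup-driver=systemd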
	I0229 01:37:49.522448  148329 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-011190
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-011190: (3.147825968s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-011190 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-011190 status --format={{.Host}}: exit status 7 (87.288507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
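Here exit status 7 corresponds to the Stopped host state shown in stdout, which the test tolerates right after a stop. A sketch of a status check that accepts it, assuming the same profile:

    # Non-zero status is expected after a stop; treat exit code 7 as acceptable.
    out/minikube-linux-amd64 -p kubernetes-upgrade-011190 status --format='{{.Host}}'
    rc=$?
    if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then
      echo "unexpected status exit code: $rc" >&2
      exit 1
    fi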
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
E0229 01:38:09.260749  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (1m11.642969656s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-011190 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (128.011028ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-011190] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-011190
	    minikube start -p kubernetes-upgrade-011190 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0111902 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-011190 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
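The downgrade is refused because state written by a v1.29.0-rc.2 control plane (etcd contents, API objects) cannot be safely read by v1.16.0 components; the clean path backwards is a fresh cluster. Minikube's first suggested option as a runnable sketch, using the profile name from this run:

    # Option 1: destroy the cluster and recreate it at the older version.
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-011190
    out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 \
      --kubernetes-version=v1.16.0 --driver=kvm2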
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
E0229 01:39:10.694950  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-011190 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (26.074557494s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 01:39:30.760029881 +0000 UTC m=+3190.817942270
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-011190 -n kubernetes-upgrade-011190
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-011190 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-011190 logs -n 25: (1.203530017s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo journalctl                       | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo docker                           | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo                                  | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo cat                              | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo containerd                       | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC |                     |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo systemctl                        | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo find                             | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-579291 sudo crio                             | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-579291                                       | auto-579291   | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC | 29 Feb 24 01:39 UTC |
	| start   | -p calico-579291 --memory=3072                       | calico-579291 | jenkins | v1.32.0 | 29 Feb 24 01:39 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:39:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:39:20.816204  154325 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:39:20.816364  154325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:39:20.816377  154325 out.go:304] Setting ErrFile to fd 2...
	I0229 01:39:20.816383  154325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:39:20.816650  154325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:39:20.817452  154325 out.go:298] Setting JSON to false
	I0229 01:39:20.819012  154325 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4912,"bootTime":1709165849,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:39:20.819172  154325 start.go:139] virtualization: kvm guest
	I0229 01:39:20.821370  154325 out.go:177] * [calico-579291] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:39:20.823177  154325 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:39:20.824670  154325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:39:20.823225  154325 notify.go:220] Checking for updates...
	I0229 01:39:20.827184  154325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:39:20.828512  154325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:39:20.829908  154325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:39:20.831277  154325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:39:20.833257  154325 config.go:182] Loaded profile config "cert-expiration-725953": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:39:20.833370  154325 config.go:182] Loaded profile config "kindnet-579291": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:39:20.833466  154325 config.go:182] Loaded profile config "kubernetes-upgrade-011190": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:39:20.833583  154325 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:39:20.890073  154325 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:39:19.914698  152990 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.790065554s)
	I0229 01:39:19.914795  152990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 01:39:19.936297  152990 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0229 01:39:19.961914  152990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:39:19.980402  152990 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 01:39:20.149611  152990 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 01:39:20.281138  152990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:39:20.413321  152990 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 01:39:20.435168  152990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:39:20.452287  152990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:39:20.618579  152990 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 01:39:20.739939  152990 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 01:39:20.740003  152990 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 01:39:20.750931  152990 start.go:543] Will wait 60s for crictl version
	I0229 01:39:20.751001  152990 ssh_runner.go:195] Run: which crictl
	I0229 01:39:20.757071  152990 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:39:20.824275  152990 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 01:39:20.824331  152990 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:39:20.854106  152990 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
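Interleaved with the calico-579291 startup, the concurrent profile start logged as pid 152990 restarts cri-dockerd after Docker: it stops and unmasks the socket, re-enables it, reloads systemd, restarts socket then service, and waits for the CRI socket to appear. Approximately the same sequence as a standalone sketch:

    #!/bin/bash
    # Restart cri-dockerd cleanly: socket first, then the service.
    sudo systemctl stop cri-docker.socket
    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.service
    # minikube waits up to 60s for /var/run/cri-dockerd.sock to appear.
    for _ in $(seq 1 60); do
      [ -S /var/run/cri-dockerd.sock ] && break
      sleep 1
    done
    stat /var/run/cri-dockerd.sock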
	I0229 01:39:20.891441  154325 start.go:299] selected driver: kvm2
	I0229 01:39:20.891465  154325 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:39:20.891504  154325 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:39:20.893241  154325 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:39:20.893356  154325 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:39:20.912701  154325 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:39:20.912766  154325 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:39:20.912950  154325 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:39:20.912998  154325 cni.go:84] Creating CNI manager for "calico"
	I0229 01:39:20.913003  154325 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0229 01:39:20.913012  154325 start_flags.go:323] config:
	{Name:calico-579291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-579291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:39:20.913122  154325 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:39:20.914742  154325 out.go:177] * Starting control plane node calico-579291 in cluster calico-579291
	I0229 01:39:20.916007  154325 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 01:39:20.916057  154325 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 01:39:20.916070  154325 cache.go:56] Caching tarball of preloaded images
	I0229 01:39:20.916200  154325 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:39:20.916214  154325 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 01:39:20.916324  154325 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/config.json ...
	I0229 01:39:20.916350  154325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/config.json: {Name:mk47828a54144e3dd3dbb0e9d37312c6f66038a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:39:20.916517  154325 start.go:365] acquiring machines lock for calico-579291: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:39:20.916570  154325 start.go:369] acquired machines lock for "calico-579291" in 31.808µs
	I0229 01:39:20.916592  154325 start.go:93] Provisioning new machine with config: &{Name:calico-579291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-579291 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:39:20.916690  154325 start.go:125] createHost starting for "" (driver="kvm2")
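
The "acquiring machines lock" lines above show a named lock taken with a retry delay of 500ms and an overall timeout (13m0s here). A rough in-process sketch of that acquire-with-timeout pattern (assumption: a simple map-based lock table for illustration; minikube's real locks are file-based and shared across processes):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    var (
    	mu    sync.Mutex
    	locks = map[string]bool{} // named locks, e.g. one per machine
    )

    // acquire retries every delay until the named lock is free or the
    // timeout elapses, matching the Delay:500ms Timeout:13m0s spec logged above.
    func acquire(name string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		mu.Lock()
    		if !locks[name] {
    			locks[name] = true
    			mu.Unlock()
    			return nil
    		}
    		mu.Unlock()
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring lock %q", name)
    		}
    		time.Sleep(delay)
    	}
    }

    func release(name string) {
    	mu.Lock()
    	delete(locks, name)
    	mu.Unlock()
    }

    func main() {
    	if err := acquire("calico-579291", 500*time.Millisecond, 13*time.Minute); err != nil {
    		panic(err)
    	}
    	defer release("calico-579291")
    	fmt.Println("acquired machines lock")
    }
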
	I0229 01:39:21.946091  152469 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004008 seconds
	I0229 01:39:21.946218  152469 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 01:39:21.970096  152469 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 01:39:22.505948  152469 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 01:39:22.506219  152469 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-579291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 01:39:23.022445  152469 kubeadm.go:322] [bootstrap-token] Using token: 8tq6pp.eixkogaoc1r8lsm2
	I0229 01:39:23.024031  152469 out.go:204]   - Configuring RBAC rules ...
	I0229 01:39:23.024203  152469 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 01:39:23.029542  152469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 01:39:23.042316  152469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 01:39:23.049572  152469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 01:39:23.056796  152469 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 01:39:23.066795  152469 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 01:39:23.086351  152469 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 01:39:23.377052  152469 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 01:39:23.437403  152469 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 01:39:23.442577  152469 kubeadm.go:322] 
	I0229 01:39:23.442676  152469 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 01:39:23.442691  152469 kubeadm.go:322] 
	I0229 01:39:23.442795  152469 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 01:39:23.442803  152469 kubeadm.go:322] 
	I0229 01:39:23.442839  152469 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 01:39:23.442927  152469 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 01:39:23.443062  152469 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 01:39:23.443080  152469 kubeadm.go:322] 
	I0229 01:39:23.443155  152469 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 01:39:23.443170  152469 kubeadm.go:322] 
	I0229 01:39:23.443243  152469 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 01:39:23.443259  152469 kubeadm.go:322] 
	I0229 01:39:23.443322  152469 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 01:39:23.443457  152469 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 01:39:23.443545  152469 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 01:39:23.443556  152469 kubeadm.go:322] 
	I0229 01:39:23.443666  152469 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 01:39:23.443776  152469 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 01:39:23.443790  152469 kubeadm.go:322] 
	I0229 01:39:23.443918  152469 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8tq6pp.eixkogaoc1r8lsm2 \
	I0229 01:39:23.444060  152469 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:654440c9a4fe271f917e04cd0d9189c808c68ff608d5ef051056c2b2e0e8c9f9 \
	I0229 01:39:23.444089  152469 kubeadm.go:322] 	--control-plane 
	I0229 01:39:23.444095  152469 kubeadm.go:322] 
	I0229 01:39:23.444195  152469 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 01:39:23.444202  152469 kubeadm.go:322] 
	I0229 01:39:23.444300  152469 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8tq6pp.eixkogaoc1r8lsm2 \
	I0229 01:39:23.444422  152469 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:654440c9a4fe271f917e04cd0d9189c808c68ff608d5ef051056c2b2e0e8c9f9 
	I0229 01:39:23.448295  152469 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:39:23.448334  152469 cni.go:84] Creating CNI manager for "kindnet"
	I0229 01:39:23.449897  152469 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 01:39:23.451240  152469 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 01:39:23.473984  152469 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 01:39:23.474010  152469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 01:39:23.528148  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
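
The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go program that reproduces it from the CA path used on these nodes (error handling kept minimal):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Prints the kubeadm discovery-token-ca-cert-hash for a CA certificate:
    // the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo.
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
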
	I0229 01:39:20.893912  152990 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 01:39:20.893971  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetIP
	I0229 01:39:20.897497  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:20.898198  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:39:20.898230  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:20.898310  152990 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 01:39:20.903462  152990 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:39:20.903536  152990 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:39:20.934441  152990 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:39:20.934475  152990 docker.go:615] Images already preloaded, skipping extraction
	I0229 01:39:20.934533  152990 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:39:20.961951  152990 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:39:20.961979  152990 cache_images.go:84] Images are preloaded, skipping loading
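
"Images are preloaded, skipping loading" above is the outcome of comparing the output of docker images --format {{.Repository}}:{{.Tag}} against the image set required for the target Kubernetes version. A sketch of that check (the required list below is abbreviated from the log; this is not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Decide whether the images needed for the cluster are already present,
    // the same decision "Images are preloaded, skipping loading" reflects.
    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
    		"registry.k8s.io/etcd:3.5.10-0",
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing, extraction needed:", img)
    			return
    		}
    	}
    	fmt.Println("images already preloaded, skipping extraction")
    }
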
	I0229 01:39:20.962047  152990 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:39:20.999717  152990 cni.go:84] Creating CNI manager for ""
	I0229 01:39:20.999754  152990 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:39:20.999775  152990 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:39:20.999799  152990 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.22 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-011190 NodeName:kubernetes-upgrade-011190 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:39:21.000012  152990 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-011190"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:39:21.000104  152990 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-011190 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
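
The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A trivial Go sketch that lists the kind of each document (stdlib only; a real consumer would unmarshal with a YAML library):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Split the kubeadm config stream on "---" separators and report the
    // kind: of each document, as generated in the log above.
    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
    			}
    		}
    	}
    }
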
	I0229 01:39:21.000169  152990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 01:39:21.012968  152990 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:39:21.013045  152990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:39:21.026165  152990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
	I0229 01:39:21.049570  152990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 01:39:21.069758  152990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2116 bytes)
	I0229 01:39:21.090430  152990 ssh_runner.go:195] Run: grep 192.168.61.22	control-plane.minikube.internal$ /etc/hosts
	I0229 01:39:21.094746  152990 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190 for IP: 192.168.61.22
	I0229 01:39:21.094776  152990 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:39:21.094911  152990 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:39:21.094944  152990 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:39:21.095000  152990 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key
	I0229 01:39:21.095037  152990 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key.885c1a51
	I0229 01:39:21.095066  152990 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key
	I0229 01:39:21.095159  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:39:21.095189  152990 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:39:21.095195  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:39:21.095217  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:39:21.095237  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:39:21.095257  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:39:21.095290  152990 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:39:21.095969  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:39:21.123496  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 01:39:21.155108  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:39:21.187158  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:39:21.267652  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:39:21.344472  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:39:21.383974  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:39:21.426273  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:39:21.461849  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:39:21.520731  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:39:21.559164  152990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:39:21.604341  152990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:39:21.628254  152990 ssh_runner.go:195] Run: openssl version
	I0229 01:39:21.635369  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:39:21.656363  152990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:39:21.662096  152990 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:39:21.662211  152990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:39:21.672287  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:39:21.688470  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:39:21.717969  152990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:39:21.724819  152990 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:39:21.724893  152990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:39:21.742807  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:39:21.760803  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:39:21.776411  152990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:39:21.782673  152990 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:39:21.782741  152990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:39:21.790652  152990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
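
The test -L / ln -fs pairs above install each CA under /etc/ssl/certs/<hash>.0, where <hash> comes from openssl x509 -hash -noout (the OpenSSL subject-name hash, e.g. b5213941 for minikubeCA.pem). A Go sketch of the same installation step, shelling out to openssl exactly as the log does (writing to /etc/ssl/certs requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA computes the OpenSSL subject-name hash of a PEM certificate
    // and symlinks /etc/ssl/certs/<hash>.0 to it, like the ln -fs steps above.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // mimic the -f in ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
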
	I0229 01:39:21.804431  152990 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:39:21.809430  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:39:21.816452  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:39:21.822937  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:39:21.830872  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:39:21.838301  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:39:21.846173  152990 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
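
Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 86400 seconds (24 hours); a failing check would force the cert to be regenerated. The equivalent check in Go:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresSoon mirrors `openssl x509 -noout -checkend 86400`: it reports
    // whether the certificate's NotAfter falls within the given window.
    func expiresSoon(path string, within time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(within).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
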
	I0229 01:39:21.854082  152990 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-011190 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-011190 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:39:21.854224  152990 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:39:21.877500  152990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:39:21.891271  152990 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:39:21.891293  152990 kubeadm.go:636] restartCluster start
	I0229 01:39:21.891339  152990 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:39:21.903963  152990 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:21.904614  152990 kubeconfig.go:92] found "kubernetes-upgrade-011190" server: "https://192.168.61.22:8443"
	I0229 01:39:21.905495  152990 kapi.go:59] client config for kubernetes-upgrade-011190: &rest.Config{Host:"https://192.168.61.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key", CAFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:39:21.906186  152990 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:39:21.916917  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:21.916968  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:39:21.929706  152990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:22.417038  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:22.417136  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:39:22.431428  152990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:22.917980  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:22.918053  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:39:22.935672  152990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:23.417042  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:23.417158  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:39:23.486867  152990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:23.917057  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:23.917178  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:39:23.935670  152990 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:24.417154  152990 api_server.go:166] Checking apiserver status ...
	I0229 01:39:24.417250  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:39:24.451012  152990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4026/cgroup
	W0229 01:39:24.491573  152990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4026/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:39:24.491654  152990 ssh_runner.go:195] Run: ls
	I0229 01:39:24.500558  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:24.501196  152990 api_server.go:269] stopped: https://192.168.61.22:8443/healthz: Get "https://192.168.61.22:8443/healthz": dial tcp 192.168.61.22:8443: connect: connection refused
	I0229 01:39:24.501265  152990 retry.go:31] will retry after 291.850302ms: state is "Stopped"
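
The block above polls the apiserver: pgrep for the process, then GET /healthz, retrying after a short delay while the connection is refused. A stripped-down Go sketch of the health poll (assumption: TLS verification is skipped here for brevity; the real client trusts the cluster CA and presents the kubeconfig client certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // Poll /healthz until it answers 200, retrying on connection errors
    // the way the retry.go lines above do.
    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 20; i++ {
    		resp, err := client.Get("https://192.168.61.22:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode)
    		} else {
    			fmt.Println("stopped:", err)
    		}
    		time.Sleep(300 * time.Millisecond)
    	}
    }
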
	I0229 01:39:20.918409  154325 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 01:39:20.918559  154325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:20.918608  154325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:20.934953  154325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38037
	I0229 01:39:20.935458  154325 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:20.936017  154325 main.go:141] libmachine: Using API Version  1
	I0229 01:39:20.936048  154325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:20.936449  154325 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:20.936637  154325 main.go:141] libmachine: (calico-579291) Calling .GetMachineName
	I0229 01:39:20.936816  154325 main.go:141] libmachine: (calico-579291) Calling .DriverName
	I0229 01:39:20.936997  154325 start.go:159] libmachine.API.Create for "calico-579291" (driver="kvm2")
	I0229 01:39:20.937030  154325 client.go:168] LocalClient.Create starting
	I0229 01:39:20.937066  154325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem
	I0229 01:39:20.937105  154325 main.go:141] libmachine: Decoding PEM data...
	I0229 01:39:20.937119  154325 main.go:141] libmachine: Parsing certificate...
	I0229 01:39:20.937160  154325 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem
	I0229 01:39:20.937179  154325 main.go:141] libmachine: Decoding PEM data...
	I0229 01:39:20.937190  154325 main.go:141] libmachine: Parsing certificate...
	I0229 01:39:20.937204  154325 main.go:141] libmachine: Running pre-create checks...
	I0229 01:39:20.937213  154325 main.go:141] libmachine: (calico-579291) Calling .PreCreateCheck
	I0229 01:39:20.937579  154325 main.go:141] libmachine: (calico-579291) Calling .GetConfigRaw
	I0229 01:39:20.937967  154325 main.go:141] libmachine: Creating machine...
	I0229 01:39:20.937984  154325 main.go:141] libmachine: (calico-579291) Calling .Create
	I0229 01:39:20.938120  154325 main.go:141] libmachine: (calico-579291) Creating KVM machine...
	I0229 01:39:20.939609  154325 main.go:141] libmachine: (calico-579291) DBG | found existing default KVM network
	I0229 01:39:20.941366  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:20.941192  154347 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:30:8e} reservation:<nil>}
	I0229 01:39:20.942555  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:20.942434  154347 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:34:77:39} reservation:<nil>}
	I0229 01:39:20.943384  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:20.943299  154347 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:d8:5f} reservation:<nil>}
	I0229 01:39:20.944593  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:20.944501  154347 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000323d60}
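
network.go above walks candidate private /24 subnets, skipping any whose gateway address is already bound on the host (192.168.39/50/61 are taken here), and settles on 192.168.72.0/24. A simplified Go sketch of that elimination (the candidate list is illustrative, not the driver's real search order):

    package main

    import (
    	"fmt"
    	"net"
    )

    // Pick the first private /24 whose .1 gateway address is not already
    // assigned to a host interface, echoing the skipping seen above.
    func main() {
    	taken := map[string]bool{}
    	addrs, _ := net.InterfaceAddrs()
    	for _, a := range addrs {
    		if ipnet, ok := a.(*net.IPNet); ok {
    			taken[ipnet.IP.String()] = true
    		}
    	}
    	for _, third := range []int{39, 50, 61, 72, 83} {
    		gw := fmt.Sprintf("192.168.%d.1", third)
    		if taken[gw] {
    			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
    			continue
    		}
    		fmt.Printf("using free private subnet 192.168.%d.0/24\n", third)
    		return
    	}
    }
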
	I0229 01:39:20.949677  154325 main.go:141] libmachine: (calico-579291) DBG | trying to create private KVM network mk-calico-579291 192.168.72.0/24...
	I0229 01:39:21.037944  154325 main.go:141] libmachine: (calico-579291) DBG | private KVM network mk-calico-579291 192.168.72.0/24 created
	I0229 01:39:21.038041  154325 main.go:141] libmachine: (calico-579291) Setting up store path in /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291 ...
	I0229 01:39:21.038155  154325 main.go:141] libmachine: (calico-579291) Building disk image from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:39:21.038190  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:21.038108  154347 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:39:21.038393  154325 main.go:141] libmachine: (calico-579291) Downloading /home/jenkins/minikube-integration/18063-115328/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:39:21.313003  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:21.312875  154347 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291/id_rsa...
	I0229 01:39:21.426451  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:21.426355  154347 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291/calico-579291.rawdisk...
	I0229 01:39:21.426484  154325 main.go:141] libmachine: (calico-579291) DBG | Writing magic tar header
	I0229 01:39:21.426503  154325 main.go:141] libmachine: (calico-579291) DBG | Writing SSH key tar header
	I0229 01:39:21.426572  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:21.426525  154347 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291 ...
	I0229 01:39:21.426695  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291
	I0229 01:39:21.426722  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines
	I0229 01:39:21.426733  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:39:21.426749  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328
	I0229 01:39:21.426758  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:39:21.426771  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291 (perms=drwx------)
	I0229 01:39:21.426780  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:39:21.426793  154325 main.go:141] libmachine: (calico-579291) DBG | Checking permissions on dir: /home
	I0229 01:39:21.426808  154325 main.go:141] libmachine: (calico-579291) DBG | Skipping /home - not owner
	I0229 01:39:21.426824  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:39:21.426838  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube (perms=drwxr-xr-x)
	I0229 01:39:21.426848  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328 (perms=drwxrwxr-x)
	I0229 01:39:21.426868  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:39:21.426881  154325 main.go:141] libmachine: (calico-579291) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:39:21.426898  154325 main.go:141] libmachine: (calico-579291) Creating domain...
	I0229 01:39:21.428258  154325 main.go:141] libmachine: (calico-579291) define libvirt domain using xml: 
	I0229 01:39:21.428281  154325 main.go:141] libmachine: (calico-579291) <domain type='kvm'>
	I0229 01:39:21.428292  154325 main.go:141] libmachine: (calico-579291)   <name>calico-579291</name>
	I0229 01:39:21.428298  154325 main.go:141] libmachine: (calico-579291)   <memory unit='MiB'>3072</memory>
	I0229 01:39:21.428307  154325 main.go:141] libmachine: (calico-579291)   <vcpu>2</vcpu>
	I0229 01:39:21.428314  154325 main.go:141] libmachine: (calico-579291)   <features>
	I0229 01:39:21.428320  154325 main.go:141] libmachine: (calico-579291)     <acpi/>
	I0229 01:39:21.428326  154325 main.go:141] libmachine: (calico-579291)     <apic/>
	I0229 01:39:21.428351  154325 main.go:141] libmachine: (calico-579291)     <pae/>
	I0229 01:39:21.428358  154325 main.go:141] libmachine: (calico-579291)     
	I0229 01:39:21.428367  154325 main.go:141] libmachine: (calico-579291)   </features>
	I0229 01:39:21.428373  154325 main.go:141] libmachine: (calico-579291)   <cpu mode='host-passthrough'>
	I0229 01:39:21.428379  154325 main.go:141] libmachine: (calico-579291)   
	I0229 01:39:21.428384  154325 main.go:141] libmachine: (calico-579291)   </cpu>
	I0229 01:39:21.428392  154325 main.go:141] libmachine: (calico-579291)   <os>
	I0229 01:39:21.428399  154325 main.go:141] libmachine: (calico-579291)     <type>hvm</type>
	I0229 01:39:21.428407  154325 main.go:141] libmachine: (calico-579291)     <boot dev='cdrom'/>
	I0229 01:39:21.428413  154325 main.go:141] libmachine: (calico-579291)     <boot dev='hd'/>
	I0229 01:39:21.428421  154325 main.go:141] libmachine: (calico-579291)     <bootmenu enable='no'/>
	I0229 01:39:21.428429  154325 main.go:141] libmachine: (calico-579291)   </os>
	I0229 01:39:21.428437  154325 main.go:141] libmachine: (calico-579291)   <devices>
	I0229 01:39:21.428446  154325 main.go:141] libmachine: (calico-579291)     <disk type='file' device='cdrom'>
	I0229 01:39:21.428459  154325 main.go:141] libmachine: (calico-579291)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291/boot2docker.iso'/>
	I0229 01:39:21.428466  154325 main.go:141] libmachine: (calico-579291)       <target dev='hdc' bus='scsi'/>
	I0229 01:39:21.428474  154325 main.go:141] libmachine: (calico-579291)       <readonly/>
	I0229 01:39:21.428479  154325 main.go:141] libmachine: (calico-579291)     </disk>
	I0229 01:39:21.428488  154325 main.go:141] libmachine: (calico-579291)     <disk type='file' device='disk'>
	I0229 01:39:21.428497  154325 main.go:141] libmachine: (calico-579291)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:39:21.428509  154325 main.go:141] libmachine: (calico-579291)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/calico-579291/calico-579291.rawdisk'/>
	I0229 01:39:21.428516  154325 main.go:141] libmachine: (calico-579291)       <target dev='hda' bus='virtio'/>
	I0229 01:39:21.428524  154325 main.go:141] libmachine: (calico-579291)     </disk>
	I0229 01:39:21.428531  154325 main.go:141] libmachine: (calico-579291)     <interface type='network'>
	I0229 01:39:21.428540  154325 main.go:141] libmachine: (calico-579291)       <source network='mk-calico-579291'/>
	I0229 01:39:21.428551  154325 main.go:141] libmachine: (calico-579291)       <model type='virtio'/>
	I0229 01:39:21.428559  154325 main.go:141] libmachine: (calico-579291)     </interface>
	I0229 01:39:21.428567  154325 main.go:141] libmachine: (calico-579291)     <interface type='network'>
	I0229 01:39:21.428576  154325 main.go:141] libmachine: (calico-579291)       <source network='default'/>
	I0229 01:39:21.428582  154325 main.go:141] libmachine: (calico-579291)       <model type='virtio'/>
	I0229 01:39:21.428600  154325 main.go:141] libmachine: (calico-579291)     </interface>
	I0229 01:39:21.428607  154325 main.go:141] libmachine: (calico-579291)     <serial type='pty'>
	I0229 01:39:21.428616  154325 main.go:141] libmachine: (calico-579291)       <target port='0'/>
	I0229 01:39:21.428622  154325 main.go:141] libmachine: (calico-579291)     </serial>
	I0229 01:39:21.428633  154325 main.go:141] libmachine: (calico-579291)     <console type='pty'>
	I0229 01:39:21.428640  154325 main.go:141] libmachine: (calico-579291)       <target type='serial' port='0'/>
	I0229 01:39:21.428648  154325 main.go:141] libmachine: (calico-579291)     </console>
	I0229 01:39:21.428655  154325 main.go:141] libmachine: (calico-579291)     <rng model='virtio'>
	I0229 01:39:21.428665  154325 main.go:141] libmachine: (calico-579291)       <backend model='random'>/dev/random</backend>
	I0229 01:39:21.428671  154325 main.go:141] libmachine: (calico-579291)     </rng>
	I0229 01:39:21.428679  154325 main.go:141] libmachine: (calico-579291)     
	I0229 01:39:21.428685  154325 main.go:141] libmachine: (calico-579291)     
	I0229 01:39:21.428693  154325 main.go:141] libmachine: (calico-579291)   </devices>
	I0229 01:39:21.428699  154325 main.go:141] libmachine: (calico-579291) </domain>
	I0229 01:39:21.428708  154325 main.go:141] libmachine: (calico-579291) 
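
The kvm2 driver defines the domain above through the libvirt API. An equivalent manual path, for illustration only, would be to save the XML to a file and load it with virsh (the /tmp path below is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Define and start the domain from the XML above using virsh against
    // the same qemu:///system URI the logs show.
    func main() {
    	for _, args := range [][]string{
    		{"virsh", "-c", "qemu:///system", "define", "/tmp/calico-579291.xml"},
    		{"virsh", "-c", "qemu:///system", "start", "calico-579291"},
    	} {
    		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
    		fmt.Printf("%v: %s", args, out)
    		if err != nil {
    			fmt.Println("error:", err)
    			return
    		}
    	}
    }
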
	I0229 01:39:21.432811  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:c1:5c:5f in network default
	I0229 01:39:21.433617  154325 main.go:141] libmachine: (calico-579291) Ensuring networks are active...
	I0229 01:39:21.433645  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:21.434484  154325 main.go:141] libmachine: (calico-579291) Ensuring network default is active
	I0229 01:39:21.434858  154325 main.go:141] libmachine: (calico-579291) Ensuring network mk-calico-579291 is active
	I0229 01:39:21.435469  154325 main.go:141] libmachine: (calico-579291) Getting domain xml...
	I0229 01:39:21.436422  154325 main.go:141] libmachine: (calico-579291) Creating domain...
	I0229 01:39:22.821698  154325 main.go:141] libmachine: (calico-579291) Waiting to get IP...
	I0229 01:39:22.822656  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:22.823168  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:22.823217  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:22.823159  154347 retry.go:31] will retry after 288.521256ms: waiting for machine to come up
	I0229 01:39:23.114039  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:23.114653  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:23.114679  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:23.114614  154347 retry.go:31] will retry after 364.103516ms: waiting for machine to come up
	I0229 01:39:23.480231  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:23.480922  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:23.480950  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:23.480896  154347 retry.go:31] will retry after 344.842708ms: waiting for machine to come up
	I0229 01:39:23.827767  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:23.828548  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:23.828586  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:23.828497  154347 retry.go:31] will retry after 453.383729ms: waiting for machine to come up
	I0229 01:39:24.284140  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:24.284719  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:24.284740  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:24.284667  154347 retry.go:31] will retry after 699.890868ms: waiting for machine to come up
	I0229 01:39:24.986781  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:24.987503  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:24.987535  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:24.987406  154347 retry.go:31] will retry after 948.995935ms: waiting for machine to come up
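
The "will retry after ..." waits above grow roughly geometrically with jitter (288ms, 364ms, ..., 948ms) while polling for the VM's DHCP lease. A generic Go sketch of that jittered-backoff wait (getIP is a stand-in, not the driver's real call):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // getIP is a placeholder for querying libvirt for the domain's lease.
    func getIP() (string, error) { return "", fmt.Errorf("no DHCP lease yet") }

    // Wait for the machine with growing, jittered retry intervals, the
    // shape of the "waiting for machine to come up" retries above.
    func main() {
    	backoff := 250 * time.Millisecond
    	for attempt := 1; attempt <= 8; attempt++ {
    		if ip, err := getIP(); err == nil {
    			fmt.Println("machine IP:", ip)
    			return
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff = backoff * 3 / 2
    	}
    	fmt.Println("gave up waiting for machine IP")
    }
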
	I0229 01:39:24.951857  152469 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.42366578s)
	I0229 01:39:24.951910  152469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:39:24.952073  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:24.952163  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f83faa3abac33ca85ff15afa19006ad0a2554d61 minikube.k8s.io/name=kindnet-579291 minikube.k8s.io/updated_at=2024_02_29T01_39_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:25.097763  152469 ops.go:34] apiserver oom_adj: -16
	I0229 01:39:25.097932  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:25.598976  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:26.098739  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:26.599002  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:27.098482  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:27.598486  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:28.098554  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:28.598031  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:29.098141  152469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 01:39:24.794017  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:27.769378  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:39:27.769429  152990 retry.go:31] will retry after 283.886018ms: https://192.168.61.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:39:28.053851  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:28.063110  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:28.063152  152990 retry.go:31] will retry after 380.053229ms: https://192.168.61.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:28.443751  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:28.448433  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:28.448468  152990 retry.go:31] will retry after 432.437398ms: https://192.168.61.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:28.881013  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:28.886327  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:28.886361  152990 retry.go:31] will retry after 601.012054ms: https://192.168.61.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:39:29.488255  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:29.493136  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
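
The sequence above is the healthz gate for kubernetes-upgrade-011190: anonymous GETs against /healthz first return 403 (the RBAC grant for system:anonymous is apparently not bootstrapped yet), then 500 while the [-]-marked post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A sketch of that probe loop; the multiplicative backoff is an assumption standing in for minikube's retry package, which produced the jittered 283ms/380ms/432ms/601ms delays logged:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz GETs /healthz until it returns 200, treating 403 and 500 as
// retryable, as the log above does. Certificate verification is skipped in
// this sketch for brevity.
func probeHealthz(url string) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	delay := 250 * time.Millisecond
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 500 responses enumerate the failing [-] checks by name.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}
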
	I0229 01:39:29.517190  152990 system_pods.go:86] 7 kube-system pods found
	I0229 01:39:29.517224  152990 system_pods.go:89] "coredns-76f75df574-hz9nh" [3e55e508-027a-4076-990f-cfcbeaa8a090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:39:29.517232  152990 system_pods.go:89] "etcd-kubernetes-upgrade-011190" [bafa312d-e797-4160-9ecd-cc07bbbc06a7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:39:29.517241  152990 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-011190" [a6cd2404-1853-4696-bb09-352e7ea04a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:39:29.517251  152990 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-011190" [b425428e-ec6a-477c-9af3-da7b98cee8eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:39:29.517257  152990 system_pods.go:89] "kube-proxy-wfkqm" [59b844bf-45f1-4218-a6be-da656f7bffa3] Running
	I0229 01:39:29.517263  152990 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-011190" [89b56d29-0226-4180-aa3b-23736ce46095] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:39:29.517272  152990 system_pods.go:89] "storage-provisioner" [721146bb-572f-434a-94eb-d072939c7086] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:39:29.518753  152990 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:39:29.518785  152990 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.22
	I0229 01:39:29.518793  152990 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 01:39:29.518799  152990 kubeadm.go:640] restartCluster took 7.627500303s
	I0229 01:39:29.518807  152990 kubeadm.go:406] StartCluster complete in 7.664733638s
	I0229 01:39:29.518826  152990 settings.go:142] acquiring lock: {Name:mk324b2a181b324166fa2d8da3ad5d1101ca0339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:39:29.518903  152990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:39:29.519661  152990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:39:29.519873  152990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:39:29.520030  152990 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 01:39:29.520108  152990 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-011190"
	I0229 01:39:29.520113  152990 config.go:182] Loaded profile config "kubernetes-upgrade-011190": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:39:29.520126  152990 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-011190"
	I0229 01:39:29.520128  152990 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-011190"
	W0229 01:39:29.520134  152990 addons.go:243] addon storage-provisioner should already be in state true
	I0229 01:39:29.520148  152990 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-011190"
	I0229 01:39:29.520184  152990 host.go:66] Checking if "kubernetes-upgrade-011190" exists ...
	I0229 01:39:29.520211  152990 cache.go:107] acquiring lock: {Name:mkf83f87b4b5efd9201d385629e40dc6af5715f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:39:29.520289  152990 cache.go:115] /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 01:39:29.520299  152990 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 100.557µs
	I0229 01:39:29.520308  152990 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 01:39:29.520317  152990 cache.go:87] Successfully saved all images to host disk.
	I0229 01:39:29.520488  152990 config.go:182] Loaded profile config "kubernetes-upgrade-011190": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:39:29.520577  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.520601  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.520609  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.520630  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.520716  152990 kapi.go:59] client config for kubernetes-upgrade-011190: &rest.Config{Host:"https://192.168.61.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key", CAFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:39:29.520838  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.520864  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.526136  152990 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-011190" context rescaled to 1 replicas
	I0229 01:39:29.526173  152990 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.22 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:39:29.527921  152990 out.go:177] * Verifying Kubernetes components...
	I0229 01:39:29.529105  152990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:39:29.537443  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34085
	I0229 01:39:29.537516  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0229 01:39:29.537908  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.538034  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.538394  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36869
	I0229 01:39:29.538557  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.538566  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.538583  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.538569  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.538891  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.538983  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.539020  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.539374  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetState
	I0229 01:39:29.539449  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.539487  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.539526  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.539542  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.539953  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.540076  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetState
	I0229 01:39:29.543183  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.543228  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.548708  152990 kapi.go:59] client config for kubernetes-upgrade-011190: &rest.Config{Host:"https://192.168.61.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.crt", KeyFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubernetes-upgrade-011190/client.key", CAFile:"/home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 01:39:29.549024  152990 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-011190"
	W0229 01:39:29.549038  152990 addons.go:243] addon default-storageclass should already be in state true
	I0229 01:39:29.549066  152990 host.go:66] Checking if "kubernetes-upgrade-011190" exists ...
	I0229 01:39:29.549449  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.549484  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.563254  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0229 01:39:29.563650  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0229 01:39:29.563868  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.564390  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.564410  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.564826  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.564999  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:39:29.565177  152990 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:39:29.565198  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:39:29.565640  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.566365  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.566449  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.567112  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.567552  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetState
	I0229 01:39:29.568947  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.569620  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:39:29.569645  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.569691  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:39:29.569961  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:39:29.570125  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:39:29.570393  152990 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:39:29.570722  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:39:29.575937  152990 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:39:29.576049  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0229 01:39:29.577275  152990 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:39:29.577387  152990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:39:29.577407  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:39:29.578293  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.578905  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.578931  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.579621  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.580398  152990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:39:29.580442  152990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:39:29.580666  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.581343  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:39:29.581361  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:39:29.581369  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.581626  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:39:29.581810  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:39:29.581998  152990 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
	I0229 01:39:29.601756  152990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I0229 01:39:29.602130  152990 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:39:29.602708  152990 main.go:141] libmachine: Using API Version  1
	I0229 01:39:29.602730  152990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:39:29.603113  152990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:39:29.603329  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetState
	I0229 01:39:29.605158  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .DriverName
	I0229 01:39:29.605441  152990 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:39:29.605469  152990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:39:29.605487  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHHostname
	I0229 01:39:29.608590  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.608918  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:7c:36", ip: ""} in network mk-kubernetes-upgrade-011190: {Iface:virbr3 ExpiryTime:2024-02-29 02:33:31 +0000 UTC Type:0 Mac:52:54:00:c5:7c:36 Iaid: IPaddr:192.168.61.22 Prefix:24 Hostname:kubernetes-upgrade-011190 Clientid:01:52:54:00:c5:7c:36}
	I0229 01:39:29.608949  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | domain kubernetes-upgrade-011190 has defined IP address 192.168.61.22 and MAC address 52:54:00:c5:7c:36 in network mk-kubernetes-upgrade-011190
	I0229 01:39:29.609179  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHPort
	I0229 01:39:29.609353  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHKeyPath
	I0229 01:39:29.609506  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .GetSSHUsername
	I0229 01:39:29.609642  152990 sshutil.go:53] new ssh client: &{IP:192.168.61.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/kubernetes-upgrade-011190/id_rsa Username:docker}
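
The two "scp memory" lines above stream manifests held in memory straight onto the guest over SSH (2676 bytes for storage-provisioner.yaml, 271 bytes for storageclass.yaml) rather than copying local files. A sketch of that operation with golang.org/x/crypto/ssh; the sudo-tee mechanism is an assumption about how the bytes land on disk, not minikube's verbatim code:

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// scpMemory writes an in-memory byte slice to dst on the remote host by
// piping it into `sudo tee`, discarding tee's echo of the content.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
}
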
	I0229 01:39:29.682986  152990 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:39:29.683073  152990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:39:29.683253  152990 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 01:39:29.733339  152990 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:39:29.733368  152990 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:39:29.733378  152990 cache_images.go:262] succeeded pushing to: kubernetes-upgrade-011190
	I0229 01:39:29.733383  152990 cache_images.go:263] failed pushing to: 
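
Because `docker images --format {{.Repository}}:{{.Tag}}` reported every image for both the old (k8s.gcr.io v1.16.0) and new (registry.k8s.io v1.29.0-rc.2) control planes, cache_images concludes the preload can be skipped. A sketch of that set-membership check over the stdout block above:

package sketch

import "strings"

// imagesPreloaded reports whether every required image already exists in the
// runtime, given the newline-separated `docker images` output shown above.
// When it returns true, loading images from cache tarballs is skipped.
func imagesPreloaded(dockerImagesOutput string, required []string) bool {
	have := make(map[string]bool)
	for _, line := range strings.Split(dockerImagesOutput, "\n") {
		if img := strings.TrimSpace(line); img != "" {
			have[img] = true
		}
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}
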
	I0229 01:39:29.733417  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:29.733431  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:29.733795  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:29.733815  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:29.733824  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:29.733827  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Closing plugin on server side
	I0229 01:39:29.733833  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:29.734083  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:29.734094  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:29.755607  152990 api_server.go:72] duration metric: took 229.403889ms to wait for apiserver process to appear ...
	I0229 01:39:29.755637  152990 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:39:29.755666  152990 api_server.go:253] Checking apiserver healthz at https://192.168.61.22:8443/healthz ...
	I0229 01:39:29.760297  152990 api_server.go:279] https://192.168.61.22:8443/healthz returned 200:
	ok
	I0229 01:39:29.762348  152990 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:39:29.762377  152990 api_server.go:131] duration metric: took 6.732615ms to wait for apiserver health ...
	I0229 01:39:29.762388  152990 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:39:29.775576  152990 system_pods.go:59] 7 kube-system pods found
	I0229 01:39:29.775608  152990 system_pods.go:61] "coredns-76f75df574-hz9nh" [3e55e508-027a-4076-990f-cfcbeaa8a090] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:39:29.775615  152990 system_pods.go:61] "etcd-kubernetes-upgrade-011190" [bafa312d-e797-4160-9ecd-cc07bbbc06a7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:39:29.775624  152990 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-011190" [a6cd2404-1853-4696-bb09-352e7ea04a10] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:39:29.775633  152990 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-011190" [b425428e-ec6a-477c-9af3-da7b98cee8eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:39:29.775637  152990 system_pods.go:61] "kube-proxy-wfkqm" [59b844bf-45f1-4218-a6be-da656f7bffa3] Running
	I0229 01:39:29.775642  152990 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-011190" [89b56d29-0226-4180-aa3b-23736ce46095] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:39:29.775647  152990 system_pods.go:61] "storage-provisioner" [721146bb-572f-434a-94eb-d072939c7086] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:39:29.775654  152990 system_pods.go:74] duration metric: took 13.258681ms to wait for pod list to return data ...
	I0229 01:39:29.775671  152990 kubeadm.go:581] duration metric: took 249.474903ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 01:39:29.775683  152990 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:39:29.776325  152990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:39:29.781772  152990 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:39:29.781818  152990 node_conditions.go:123] node cpu capacity is 2
	I0229 01:39:29.781831  152990 node_conditions.go:105] duration metric: took 6.143343ms to run NodePressure ...
	I0229 01:39:29.781847  152990 start.go:228] waiting for startup goroutines ...
	I0229 01:39:29.808118  152990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:39:30.670434  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:30.670466  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:30.670447  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:30.670532  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:30.670763  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Closing plugin on server side
	I0229 01:39:30.670804  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:30.670815  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Closing plugin on server side
	I0229 01:39:30.670825  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:30.670838  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:30.670848  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:30.670848  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:30.670857  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:30.670865  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:30.670872  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:30.672923  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Closing plugin on server side
	I0229 01:39:30.672953  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) DBG | Closing plugin on server side
	I0229 01:39:30.672959  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:30.672976  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:30.673023  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:30.673046  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:30.679798  152990 main.go:141] libmachine: Making call to close driver server
	I0229 01:39:30.679818  152990 main.go:141] libmachine: (kubernetes-upgrade-011190) Calling .Close
	I0229 01:39:30.680092  152990 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:39:30.680116  152990 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:39:30.682112  152990 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 01:39:30.683456  152990 addons.go:505] enable addons completed in 1.163431703s: enabled=[storage-provisioner default-storageclass]
	I0229 01:39:30.683493  152990 start.go:233] waiting for cluster config update ...
	I0229 01:39:30.683507  152990 start.go:242] writing updated cluster config ...
	I0229 01:39:30.683726  152990 ssh_runner.go:195] Run: rm -f paused
	I0229 01:39:30.742429  152990 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:39:30.744227  152990 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-011190" cluster and "default" namespace by default
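
The "(minor skew: 0)" on the line above compares the installed kubectl's minor version (1.29.2) against the cluster's (1.29.0-rc.2); a larger skew would earn a compatibility warning. A simplified sketch of the computation (minikube itself parses versions with a semver library, so treat this hand parsing as an assumption):

package sketch

import (
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two Kubernetes version strings, e.g. ("1.29.2", "1.29.0-rc.2") -> 0.
func minorSkew(kubectlVersion, clusterVersion string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0
		}
		n, _ := strconv.Atoi(parts[1]) // pre-release suffixes live in parts[2:]
		return n
	}
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	return skew
}
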
	I0229 01:39:25.938151  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:25.938695  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:25.938717  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:25.938631  154347 retry.go:31] will retry after 783.121132ms: waiting for machine to come up
	I0229 01:39:26.723526  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:26.724146  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:26.724170  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:26.724085  154347 retry.go:31] will retry after 1.473013225s: waiting for machine to come up
	I0229 01:39:28.198279  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:28.198833  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:28.198879  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:28.198783  154347 retry.go:31] will retry after 1.709359805s: waiting for machine to come up
	I0229 01:39:29.910850  154325 main.go:141] libmachine: (calico-579291) DBG | domain calico-579291 has defined MAC address 52:54:00:91:48:13 in network mk-calico-579291
	I0229 01:39:29.911428  154325 main.go:141] libmachine: (calico-579291) DBG | unable to find current IP address of domain calico-579291 in network mk-calico-579291
	I0229 01:39:29.911452  154325 main.go:141] libmachine: (calico-579291) DBG | I0229 01:39:29.911385  154347 retry.go:31] will retry after 1.855429012s: waiting for machine to come up
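
Meanwhile the calico-579291 machine is still booting: the kvm2 driver polls libvirt's DHCP leases for MAC 52:54:00:91:48:13 and sleeps a growing, jittered interval between attempts (783ms, 1.47s, 1.71s, 1.86s above). A sketch of that wait; lookupLease is a hypothetical stand-in for the libvirt lease query:

package sketch

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls for a DHCP lease matching the domain's MAC address,
// backing off with jitter between attempts as the retry.go lines above do.
func waitForIP(mac string, lookupLease func(mac string) (string, bool)) (string, error) {
	base := 500 * time.Millisecond
	for attempt := 0; attempt < 15; attempt++ {
		if ip, ok := lookupLease(mac); ok {
			return ip, nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base))) // jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		base = base * 3 / 2 // widen the window each round
	}
	return "", errors.New("machine never acquired an IP address")
}
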
	
	
	==> Docker <==
	Feb 29 01:39:24 kubernetes-upgrade-011190 cri-dockerd[3311]: time="2024-02-29T01:39:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/70e739b33d39d09dc55d915cde34d3248a92625eb717bfdc21986b608e493fd1/resolv.conf as [nameserver 192.168.122.1]"
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.202882321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.206631507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.206837450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.207346667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.302859363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.303028382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.303051335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.303855479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.335818140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.336121531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.336148440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.349548222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.402615836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.402684794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.402698117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.402806137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.416173723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.416237755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.416251909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.416334465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.598022471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.598405890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.598545582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 01:39:24 kubernetes-upgrade-011190 dockerd[3103]: time="2024-02-29T01:39:24.598876856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	37c7e7701b293       cbb01a7bd410d       7 seconds ago       Running             coredns                   1                   70e739b33d39d       coredns-76f75df574-hz9nh
	0d47ae67a5137       4270645ed6b7a       7 seconds ago       Running             kube-scheduler            1                   74d2ae5c67ee6       kube-scheduler-kubernetes-upgrade-011190
	d613b415ad16e       a0eed15eed449       7 seconds ago       Running             etcd                      1                   877395a7a4b47       etcd-kubernetes-upgrade-011190
	c1b43aca839ec       d4e01cdf63970       7 seconds ago       Running             kube-controller-manager   1                   cd59cb54865d0       kube-controller-manager-kubernetes-upgrade-011190
	bfa785bd68ed6       cc0a4f00aad7b       7 seconds ago       Running             kube-proxy                0                   e379023518407       kube-proxy-wfkqm
	a10d26bc8ec79       bbb47a0f83324       8 seconds ago       Running             kube-apiserver            1                   12799a6d84d8c       kube-apiserver-kubernetes-upgrade-011190
	f35d72840b25e       6e38f40d628db       10 seconds ago      Exited              storage-provisioner       1                   3bd688d74296f       storage-provisioner
	6580ad72939d2       cbb01a7bd410d       25 seconds ago      Exited              coredns                   0                   af8bcc322c961       coredns-76f75df574-hz9nh
	d468c9d6902d4       4270645ed6b7a       44 seconds ago      Exited              kube-scheduler            0                   17929264ff5e7       kube-scheduler-kubernetes-upgrade-011190
	846e934df6e53       bbb47a0f83324       44 seconds ago      Exited              kube-apiserver            0                   b79ba2bbae359       kube-apiserver-kubernetes-upgrade-011190
	2f2b65c0d525e       d4e01cdf63970       44 seconds ago      Exited              kube-controller-manager   0                   5421186fa6fd7       kube-controller-manager-kubernetes-upgrade-011190
	a03e0d424a2c4       a0eed15eed449       45 seconds ago      Exited              etcd                      0                   3ad25add7c0cb       etcd-kubernetes-upgrade-011190
	
	
	==> coredns [37c7e7701b29] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36110 - 16125 "HINFO IN 8845323409798622888.5985017055600654907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035377587s
	
	
	==> coredns [6580ad72939d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:41186 - 11373 "HINFO IN 6854411093084843172.9037017539142521613. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031373434s
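
Both CoreDNS logs above follow from the standard kubeadm-style Corefile: the ready plugin holds the readiness endpoint until the kubernetes plugin has synced with the API (hence "Still waiting on: kubernetes" while the apiserver was down), and health's lameduck setting keeps a terminating instance serving for five more seconds (the SIGTERM and "Going into lameduck mode for 5s" lines). A representative Corefile, reproduced from memory rather than captured from this cluster:

.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
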
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-011190
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-011190
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 01:38:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-011190
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 01:39:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 01:38:50 +0000   Thu, 29 Feb 2024 01:38:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 01:38:50 +0000   Thu, 29 Feb 2024 01:38:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 01:38:50 +0000   Thu, 29 Feb 2024 01:38:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 01:38:50 +0000   Thu, 29 Feb 2024 01:38:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.22
	  Hostname:    kubernetes-upgrade-011190
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d75cebbc94840febd9139482d1b4eb8
	  System UUID:                9d75cebb-c948-40fe-bd91-39482d1b4eb8
	  Boot ID:                    554c4364-9f96-425e-bfc9-0aedea864d7f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-hz9nh                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26s
	  kube-system                 etcd-kubernetes-upgrade-011190                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-011190             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-011190    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-wfkqm                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-kubernetes-upgrade-011190             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node kubernetes-upgrade-011190 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node kubernetes-upgrade-011190 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node kubernetes-upgrade-011190 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node kubernetes-upgrade-011190 event: Registered Node kubernetes-upgrade-011190 in Controller
	
	
	==> dmesg <==
	[  +0.066514] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062714] systemd-fstab-generator[486]: Ignoring "noauto" option for root device
	[  +1.139912] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.329817] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.148321] systemd-fstab-generator[823]: Ignoring "noauto" option for root device
	[  +0.151874] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +1.561720] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.143800] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +0.131922] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.157360] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +4.002504] systemd-fstab-generator[1205]: Ignoring "noauto" option for root device
	[  +0.068862] kauditd_printk_skb: 348 callbacks suppressed
	[  +5.728638] systemd-fstab-generator[1506]: Ignoring "noauto" option for root device
	[  +0.097808] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.680341] kauditd_printk_skb: 68 callbacks suppressed
	[Feb29 01:39] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[  +0.414220] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.195674] systemd-fstab-generator[2718]: Ignoring "noauto" option for root device
	[  +0.188011] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +9.281103] kauditd_printk_skb: 106 callbacks suppressed
	[  +2.762208] systemd-fstab-generator[3260]: Ignoring "noauto" option for root device
	[  +0.151866] systemd-fstab-generator[3272]: Ignoring "noauto" option for root device
	[  +0.136637] systemd-fstab-generator[3284]: Ignoring "noauto" option for root device
	[  +0.174713] systemd-fstab-generator[3299]: Ignoring "noauto" option for root device
	[  +3.250775] kauditd_printk_skb: 132 callbacks suppressed
	
	
	==> etcd [a03e0d424a2c] <==
	{"level":"info","ts":"2024-02-29T01:39:05.734086Z","caller":"traceutil/trace.go:171","msg":"trace[556773429] range","detail":"{range_begin:/registry/leases/kube-node-lease/kubernetes-upgrade-011190; range_end:; response_count:1; response_revision:315; }","duration":"234.291654ms","start":"2024-02-29T01:39:05.499791Z","end":"2024-02-29T01:39:05.734082Z","steps":["trace[556773429] 'agreement among raft nodes before linearized reading'  (duration: 234.270602ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:05.734146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.676627ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-kubernetes-upgrade-011190\" ","response":"range_response_count:1 size:6406"}
	{"level":"info","ts":"2024-02-29T01:39:05.734157Z","caller":"traceutil/trace.go:171","msg":"trace[2023445423] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-kubernetes-upgrade-011190; range_end:; response_count:1; response_revision:315; }","duration":"234.687181ms","start":"2024-02-29T01:39:05.499466Z","end":"2024-02-29T01:39:05.734153Z","steps":["trace[2023445423] 'agreement among raft nodes before linearized reading'  (duration: 234.66385ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:05.734217Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.781568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-011190\" ","response":"range_response_count:1 size:6807"}
	{"level":"info","ts":"2024-02-29T01:39:05.734228Z","caller":"traceutil/trace.go:171","msg":"trace[1064441108] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-011190; range_end:; response_count:1; response_revision:315; }","duration":"234.792223ms","start":"2024-02-29T01:39:05.499432Z","end":"2024-02-29T01:39:05.734224Z","steps":["trace[1064441108] 'agreement among raft nodes before linearized reading'  (duration: 234.76851ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:05.734318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.99275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-011190\" ","response":"range_response_count:1 size:4570"}
	{"level":"info","ts":"2024-02-29T01:39:05.734331Z","caller":"traceutil/trace.go:171","msg":"trace[1740039724] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-kubernetes-upgrade-011190; range_end:; response_count:1; response_revision:315; }","duration":"235.006069ms","start":"2024-02-29T01:39:05.499321Z","end":"2024-02-29T01:39:05.734327Z","steps":["trace[1740039724] 'agreement among raft nodes before linearized reading'  (duration: 234.978519ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:05.734383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"235.143637ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-kubernetes-upgrade-011190\" ","response":"range_response_count:1 size:5493"}
	{"level":"info","ts":"2024-02-29T01:39:05.734393Z","caller":"traceutil/trace.go:171","msg":"trace[1994452657] range","detail":"{range_begin:/registry/pods/kube-system/etcd-kubernetes-upgrade-011190; range_end:; response_count:1; response_revision:315; }","duration":"235.155251ms","start":"2024-02-29T01:39:05.499235Z","end":"2024-02-29T01:39:05.73439Z","steps":["trace[1994452657] 'agreement among raft nodes before linearized reading'  (duration: 235.135345ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T01:39:07.57135Z","caller":"traceutil/trace.go:171","msg":"trace[280566271] linearizableReadLoop","detail":"{readStateIndex:364; appliedIndex:363; }","duration":"354.563568ms","start":"2024-02-29T01:39:07.216762Z","end":"2024-02-29T01:39:07.571325Z","steps":["trace[280566271] 'read index received'  (duration: 354.421002ms)","trace[280566271] 'applied index is now lower than readState.Index'  (duration: 141.961µs)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T01:39:07.571477Z","caller":"traceutil/trace.go:171","msg":"trace[1138629786] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"486.28555ms","start":"2024-02-29T01:39:07.085173Z","end":"2024-02-29T01:39:07.571459Z","steps":["trace[1138629786] 'process raft request'  (duration: 486.050909ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:07.571502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.731522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T01:39:07.571529Z","caller":"traceutil/trace.go:171","msg":"trace[294291673] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:355; }","duration":"354.783215ms","start":"2024-02-29T01:39:07.216738Z","end":"2024-02-29T01:39:07.571521Z","steps":["trace[294291673] 'agreement among raft nodes before linearized reading'  (duration: 354.675753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T01:39:07.571547Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T01:39:07.216725Z","time spent":"354.81792ms","remote":"127.0.0.1:58570","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-02-29T01:39:07.571576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-29T01:39:07.085151Z","time spent":"486.371452ms","remote":"127.0.0.1:58642","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":806,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-wfkqm.17b831bd284919ac\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-wfkqm.17b831bd284919ac\" value_size:726 lease:2751573970076149564 >> failure:<>"}
	{"level":"info","ts":"2024-02-29T01:39:08.185305Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T01:39:08.185345Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-011190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.22:2380"],"advertise-client-urls":["https://192.168.61.22:2379"]}
	{"level":"warn","ts":"2024-02-29T01:39:08.185473Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T01:39:08.185588Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T01:39:08.212269Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.22:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T01:39:08.2123Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.22:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T01:39:08.216031Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2294a45630d9262f","current-leader-member-id":"2294a45630d9262f"}
	{"level":"info","ts":"2024-02-29T01:39:08.825227Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.22:2380"}
	{"level":"info","ts":"2024-02-29T01:39:08.82543Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.22:2380"}
	{"level":"info","ts":"2024-02-29T01:39:08.825441Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-011190","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.22:2380"],"advertise-client-urls":["https://192.168.61.22:2379"]}
	
	
	==> etcd [d613b415ad16] <==
	{"level":"info","ts":"2024-02-29T01:39:24.958275Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T01:39:24.958434Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T01:39:24.959105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f switched to configuration voters=(2491797183936407087)"}
	{"level":"info","ts":"2024-02-29T01:39:24.96411Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"70cba120fa331721","local-member-id":"2294a45630d9262f","added-peer-id":"2294a45630d9262f","added-peer-peer-urls":["https://192.168.61.22:2380"]}
	{"level":"info","ts":"2024-02-29T01:39:24.964651Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"70cba120fa331721","local-member-id":"2294a45630d9262f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T01:39:24.966002Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T01:39:24.977801Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T01:39:24.980133Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.22:2380"}
	{"level":"info","ts":"2024-02-29T01:39:24.98018Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.22:2380"}
	{"level":"info","ts":"2024-02-29T01:39:24.981105Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2294a45630d9262f","initial-advertise-peer-urls":["https://192.168.61.22:2380"],"listen-peer-urls":["https://192.168.61.22:2380"],"advertise-client-urls":["https://192.168.61.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T01:39:24.981163Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T01:39:26.123958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T01:39:26.124017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T01:39:26.124056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f received MsgPreVoteResp from 2294a45630d9262f at term 2"}
	{"level":"info","ts":"2024-02-29T01:39:26.124082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T01:39:26.12409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f received MsgVoteResp from 2294a45630d9262f at term 3"}
	{"level":"info","ts":"2024-02-29T01:39:26.124286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2294a45630d9262f became leader at term 3"}
	{"level":"info","ts":"2024-02-29T01:39:26.124328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2294a45630d9262f elected leader 2294a45630d9262f at term 3"}
	{"level":"info","ts":"2024-02-29T01:39:26.131072Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2294a45630d9262f","local-member-attributes":"{Name:kubernetes-upgrade-011190 ClientURLs:[https://192.168.61.22:2379]}","request-path":"/0/members/2294a45630d9262f/attributes","cluster-id":"70cba120fa331721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T01:39:26.131422Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T01:39:26.131632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T01:39:26.135101Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T01:39:26.135278Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T01:39:26.136705Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T01:39:26.144384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.22:2379"}
	
	
	==> kernel <==
	 01:39:31 up 1 min,  0 users,  load average: 1.47, 0.44, 0.15
	Linux kubernetes-upgrade-011190 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [846e934df6e5] <==
	W0229 01:39:17.540676       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.554591       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.563006       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.631267       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.688204       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.701659       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.710724       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.738742       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.763524       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.805217       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.806677       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.847774       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.853764       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.877188       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.909805       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.954991       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.983188       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:17.991336       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.025270       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.086005       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.112466       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.137579       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.141273       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.171744       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 01:39:18.213302       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a10d26bc8ec7] <==
	I0229 01:39:27.742848       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0229 01:39:27.742857       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0229 01:39:27.742921       1 controller.go:133] Starting OpenAPI controller
	I0229 01:39:27.742970       1 controller.go:85] Starting OpenAPI V3 controller
	I0229 01:39:27.742982       1 naming_controller.go:291] Starting NamingConditionController
	I0229 01:39:27.742994       1 establishing_controller.go:76] Starting EstablishingController
	I0229 01:39:27.743008       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0229 01:39:27.743014       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0229 01:39:27.743025       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 01:39:27.863045       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 01:39:27.874944       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 01:39:27.875737       1 aggregator.go:165] initial CRD sync complete...
	I0229 01:39:27.876002       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 01:39:27.876174       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 01:39:27.876279       1 cache.go:39] Caches are synced for autoregister controller
	I0229 01:39:27.939513       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 01:39:27.939754       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 01:39:27.940292       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 01:39:27.940666       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 01:39:27.944713       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 01:39:27.944979       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 01:39:27.945073       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0229 01:39:27.976656       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0229 01:39:28.747000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 01:39:30.545060       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2f2b65c0d525] <==
	I0229 01:39:04.949118       1 shared_informer.go:318] Caches are synced for TTL
	I0229 01:39:04.956344       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0229 01:39:04.959559       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0229 01:39:04.963437       1 shared_informer.go:318] Caches are synced for expand
	I0229 01:39:04.969542       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0229 01:39:04.984088       1 shared_informer.go:318] Caches are synced for ephemeral
	I0229 01:39:04.985373       1 shared_informer.go:318] Caches are synced for endpoint
	I0229 01:39:04.988352       1 shared_informer.go:318] Caches are synced for job
	I0229 01:39:04.988393       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 01:39:04.991388       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 01:39:05.017989       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 01:39:05.018179       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 01:39:05.063026       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 01:39:05.089004       1 shared_informer.go:318] Caches are synced for crt configmap
	I0229 01:39:05.143337       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 01:39:05.486343       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 01:39:05.493765       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 01:39:05.493840       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 01:39:05.803352       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 1"
	I0229 01:39:05.854935       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wfkqm"
	I0229 01:39:05.925608       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-hz9nh"
	I0229 01:39:05.973376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="175.402457ms"
	I0229 01:39:06.006138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="32.683702ms"
	I0229 01:39:06.011099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="219.413µs"
	I0229 01:39:06.043839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="82.868µs"
	
	
	==> kube-controller-manager [c1b43aca839e] <==
	I0229 01:39:30.222127       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0229 01:39:30.222360       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0229 01:39:30.222603       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0229 01:39:30.222752       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0229 01:39:30.222858       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0229 01:39:30.222949       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0229 01:39:30.223364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0229 01:39:30.223535       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0229 01:39:30.223753       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0229 01:39:30.223924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0229 01:39:30.224195       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0229 01:39:30.224330       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0229 01:39:30.224478       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0229 01:39:30.224710       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0229 01:39:30.224933       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	W0229 01:39:30.225127       1 shared_informer.go:591] resyncPeriod 12h40m48.881002521s is smaller than resyncCheckPeriod 17h58m42.725284872s and the informer has already started. Changing it to 17h58m42.725284872s
	I0229 01:39:30.225366       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0229 01:39:30.225559       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0229 01:39:30.225694       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0229 01:39:30.225765       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0229 01:39:30.225872       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0229 01:39:30.225999       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0229 01:39:30.253474       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0229 01:39:30.253666       1 ttl_controller.go:124] "Starting TTL controller"
	I0229 01:39:30.253736       1 shared_informer.go:311] Waiting for caches to sync for TTL
	
	
	==> kube-proxy [bfa785bd68ed] <==
	I0229 01:39:26.477333       1 server_others.go:72] "Using iptables proxy"
	I0229 01:39:27.893146       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.22"]
	I0229 01:39:28.007100       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 01:39:28.007149       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 01:39:28.007164       1 server_others.go:168] "Using iptables Proxier"
	I0229 01:39:28.011808       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 01:39:28.015200       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 01:39:28.015239       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:39:28.017515       1 config.go:188] "Starting service config controller"
	I0229 01:39:28.018410       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 01:39:28.018655       1 config.go:97] "Starting endpoint slice config controller"
	I0229 01:39:28.018778       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 01:39:28.020522       1 config.go:315] "Starting node config controller"
	I0229 01:39:28.020646       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 01:39:28.120003       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 01:39:28.120366       1 shared_informer.go:318] Caches are synced for service config
	I0229 01:39:28.121363       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0d47ae67a513] <==
	I0229 01:39:25.990739       1 serving.go:380] Generated self-signed cert in-memory
	W0229 01:39:27.783388       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 01:39:27.783641       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 01:39:27.783721       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 01:39:27.783746       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 01:39:27.854135       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 01:39:27.854182       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 01:39:27.858604       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 01:39:27.859129       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 01:39:27.859334       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 01:39:27.859495       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 01:39:27.960162       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d468c9d6902d] <==
	W0229 01:38:51.038691       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 01:38:51.038766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 01:38:51.038705       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 01:38:51.039339       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 01:38:51.065600       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 01:38:51.065658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 01:38:51.115277       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 01:38:51.118212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 01:38:51.123052       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 01:38:51.123149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 01:38:51.162574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 01:38:51.162787       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 01:38:51.209638       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 01:38:51.209879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 01:38:51.352404       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 01:38:51.352786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 01:38:51.396867       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 01:38:51.397099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 01:38:51.421839       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 01:38:51.422106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0229 01:38:54.206689       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 01:39:08.163247       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 01:39:08.163803       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 01:39:08.164073       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 01:39:08.164787       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.417332    1513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8edcf337fceaacc7b9e28d1a71f2b0431696b079c22658dea672c7391c970fae"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.417374    1513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79ba2bbae35999ff4992a0d1f6abbcf2edccbd4b1d37f3671a4be679389f095"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.417404    1513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad25add7c0cbd6be26c25cc3e377e92fc62ee920350e08c5fdea6a4188a715c"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.417490    1513 scope.go:117] "RemoveContainer" containerID="251fab4be8f52d131e91a105e0aaa6d99feee1d39d5fb21ec8c283919d2d753a"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.418417    1513 scope.go:117] "RemoveContainer" containerID="f35d72840b25e7a1586228dc5b3b36b3b77c9c65381973538ece005a0ac5dfbb"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:23.418665    1513 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(721146bb-572f-434a-94eb-d072939c7086)\"" pod="kube-system/storage-provisioner" podUID="721146bb-572f-434a-94eb-d072939c7086"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.425572    1513 status_manager.go:853] "Failed to get status for pod" podUID="627d8303f6f88c090009353373704609" pod="kube-system/kube-apiserver-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.427425    1513 status_manager.go:853] "Failed to get status for pod" podUID="721146bb-572f-434a-94eb-d072939c7086" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.428494    1513 status_manager.go:853] "Failed to get status for pod" podUID="3e55e508-027a-4076-990f-cfcbeaa8a090" pod="kube-system/coredns-76f75df574-hz9nh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hz9nh\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.431410    1513 status_manager.go:853] "Failed to get status for pod" podUID="841f5b3db3772b8abd81f0da382f014a" pod="kube-system/kube-controller-manager-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.433089    1513 status_manager.go:853] "Failed to get status for pod" podUID="3e24c5073aa44f030d02444f4a7b2207" pod="kube-system/kube-scheduler-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.443210    1513 status_manager.go:853] "Failed to get status for pod" podUID="02eac3ea0043ac2d5fee49ee2d17daca" pod="kube-system/etcd-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.444261    1513 status_manager.go:853] "Failed to get status for pod" podUID="721146bb-572f-434a-94eb-d072939c7086" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.446826    1513 status_manager.go:853] "Failed to get status for pod" podUID="3e55e508-027a-4076-990f-cfcbeaa8a090" pod="kube-system/coredns-76f75df574-hz9nh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-hz9nh\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.450640    1513 status_manager.go:853] "Failed to get status for pod" podUID="841f5b3db3772b8abd81f0da382f014a" pod="kube-system/kube-controller-manager-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.454619    1513 status_manager.go:853] "Failed to get status for pod" podUID="3e24c5073aa44f030d02444f4a7b2207" pod="kube-system/kube-scheduler-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.459672    1513 status_manager.go:853] "Failed to get status for pod" podUID="02eac3ea0043ac2d5fee49ee2d17daca" pod="kube-system/etcd-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:23 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:23.467102    1513 status_manager.go:853] "Failed to get status for pod" podUID="627d8303f6f88c090009353373704609" pod="kube-system/kube-apiserver-kubernetes-upgrade-011190" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-011190\": dial tcp 192.168.61.22:8443: connect: connection refused"
	Feb 29 01:39:24 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:24.466816    1513 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-011190?timeout=10s\": dial tcp 192.168.61.22:8443: connect: connection refused" interval="6.4s"
	Feb 29 01:39:24 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:24.892745    1513 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251fab4be8f52d131e91a105e0aaa6d99feee1d39d5fb21ec8c283919d2d753a"
	Feb 29 01:39:24 kubernetes-upgrade-011190 kubelet[1513]: I0229 01:39:24.893085    1513 scope.go:117] "RemoveContainer" containerID="f35d72840b25e7a1586228dc5b3b36b3b77c9c65381973538ece005a0ac5dfbb"
	Feb 29 01:39:24 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:24.893302    1513 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(721146bb-572f-434a-94eb-d072939c7086)\"" pod="kube-system/storage-provisioner" podUID="721146bb-572f-434a-94eb-d072939c7086"
	Feb 29 01:39:27 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:27.802429    1513 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 01:39:27 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:27.802555    1513 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 01:39:27 kubernetes-upgrade-011190 kubelet[1513]: E0229 01:39:27.802612    1513 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	
	==> storage-provisioner [f35d72840b25] <==
	I0229 01:39:21.694180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 01:39:21.697492       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
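
The storage-provisioner exits fatally because its very first API call — fetching the server version through the in-cluster service address 10.96.0.1:443 — is refused. A sketch of that call path with client-go (assuming the standard k8s.io/client-go modules; this is not the provisioner's actual source):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config resolves the apiserver via the "kubernetes" service
	// VIP (10.96.0.1:443 here); a refused connection at that address is what
	// produced the fatal "error getting server version" above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err) // the provisioner logs this and exits (F0229 ... main.go:39)
	}
	fmt.Println("server version:", v.GitVersion)
}
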
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-011190 -n kubernetes-upgrade-011190
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-011190 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-011190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-011190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-011190: (1.212928567s)
--- FAIL: TestKubernetesUpgrade (397.73s)

TestStartStop/group/old-k8s-version/serial/FirstStart (283.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: exit status 109 (4m43.205729249s)

-- stdout --
	* [old-k8s-version-096771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-096771 in cluster old-k8s-version-096771
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0229 01:43:27.652494  163581 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:43:27.652644  163581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:43:27.652657  163581 out.go:304] Setting ErrFile to fd 2...
	I0229 01:43:27.652664  163581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:43:27.652973  163581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:43:27.654161  163581 out.go:298] Setting JSON to false
	I0229 01:43:27.655880  163581 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5159,"bootTime":1709165849,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:43:27.656063  163581 start.go:139] virtualization: kvm guest
	I0229 01:43:27.658256  163581 out.go:177] * [old-k8s-version-096771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:43:27.659651  163581 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:43:27.659654  163581 notify.go:220] Checking for updates...
	I0229 01:43:27.660996  163581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:43:27.662357  163581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:43:27.663672  163581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:43:27.664988  163581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:43:27.666277  163581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:43:27.668129  163581 config.go:182] Loaded profile config "bridge-579291": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:43:27.668262  163581 config.go:182] Loaded profile config "flannel-579291": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:43:27.668399  163581 config.go:182] Loaded profile config "kubenet-579291": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:43:27.668522  163581 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:43:27.722210  163581 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 01:43:27.723494  163581 start.go:299] selected driver: kvm2
	I0229 01:43:27.723522  163581 start.go:903] validating driver "kvm2" against <nil>
	I0229 01:43:27.723538  163581 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:43:27.724687  163581 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:43:27.724773  163581 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:43:27.746336  163581 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:43:27.746433  163581 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 01:43:27.746767  163581 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:43:27.746876  163581 cni.go:84] Creating CNI manager for ""
	I0229 01:43:27.746916  163581 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:43:27.746931  163581 start_flags.go:323] config:
	{Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:43:27.747143  163581 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:43:27.749949  163581 out.go:177] * Starting control plane node old-k8s-version-096771 in cluster old-k8s-version-096771
	I0229 01:43:27.751383  163581 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:43:27.751429  163581 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 01:43:27.751447  163581 cache.go:56] Caching tarball of preloaded images
	I0229 01:43:27.751567  163581 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:43:27.751581  163581 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
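
The preload step is a plain cache-or-download decision: if the tarball of pre-pulled images already sits under the cache directory, the download is skipped. A simplified sketch of that check (the path layout is copied from the log; the function name is hypothetical):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout visible in the log above.
func preloadPath(minikubeHome, k8sVersion string) string {
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-"+k8sVersion+"-docker-overlay2-amd64.tar.lz4")
}

func main() {
	p := preloadPath("/home/jenkins/minikube-integration/18063-115328/.minikube", "v1.16.0")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload, skipping download:", p)
		return
	}
	fmt.Println("preload missing, would download:", p)
}
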
	I0229 01:43:27.751714  163581 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/config.json ...
	I0229 01:43:27.751741  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/config.json: {Name:mk72a390350674ee683fc8e6e28419dbf3f25f55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:43:27.751951  163581 start.go:365] acquiring machines lock for old-k8s-version-096771: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:43:35.178958  163581 start.go:369] acquired machines lock for "old-k8s-version-096771" in 7.426927536s
	I0229 01:43:35.179018  163581 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:43:35.179142  163581 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 01:43:35.181596  163581 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 01:43:35.181863  163581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:43:35.181914  163581 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:43:35.199326  163581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0229 01:43:35.199945  163581 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:43:35.200612  163581 main.go:141] libmachine: Using API Version  1
	I0229 01:43:35.200642  163581 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:43:35.200975  163581 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:43:35.201194  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:43:35.201431  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:43:35.201608  163581 start.go:159] libmachine.API.Create for "old-k8s-version-096771" (driver="kvm2")
	I0229 01:43:35.201645  163581 client.go:168] LocalClient.Create starting
	I0229 01:43:35.201686  163581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem
	I0229 01:43:35.201735  163581 main.go:141] libmachine: Decoding PEM data...
	I0229 01:43:35.201761  163581 main.go:141] libmachine: Parsing certificate...
	I0229 01:43:35.201851  163581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem
	I0229 01:43:35.201889  163581 main.go:141] libmachine: Decoding PEM data...
	I0229 01:43:35.201912  163581 main.go:141] libmachine: Parsing certificate...
	I0229 01:43:35.201943  163581 main.go:141] libmachine: Running pre-create checks...
	I0229 01:43:35.201965  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .PreCreateCheck
	I0229 01:43:35.202379  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetConfigRaw
	I0229 01:43:35.202812  163581 main.go:141] libmachine: Creating machine...
	I0229 01:43:35.202830  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .Create
	I0229 01:43:35.202984  163581 main.go:141] libmachine: (old-k8s-version-096771) Creating KVM machine...
	I0229 01:43:35.204182  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found existing default KVM network
	I0229 01:43:35.205640  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.205448  163653 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:c2:b8} reservation:<nil>}
	I0229 01:43:35.206915  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.206815  163653 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:52:ff:02} reservation:<nil>}
	I0229 01:43:35.208186  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.208093  163653 network.go:207] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030cb90}
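
The three network.go lines above show how a subnet is picked for the new VM: candidate private /24s are scanned in order, and any whose gateway is already bound to a host bridge (virbr1 on 192.168.39.1, virbr2 on 192.168.50.1) is skipped. A simplified sketch of that scan — not minikube's actual implementation:

package main

import (
	"fmt"
	"net"
)

// taken reports whether the candidate gateway address is already assigned
// to a local interface, i.e. the subnet is in use by another libvirt bridge.
func taken(gw string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
			return true
		}
	}
	return false
}

func main() {
	for _, base := range []string{"192.168.39", "192.168.50", "192.168.61"} {
		if taken(base + ".1") {
			fmt.Println("skipping subnet", base+".0/24", "that is taken")
			continue
		}
		fmt.Println("using free private subnet", base+".0/24")
		return
	}
}
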
	I0229 01:43:35.214058  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | trying to create private KVM network mk-old-k8s-version-096771 192.168.61.0/24...
	I0229 01:43:35.287492  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | private KVM network mk-old-k8s-version-096771 192.168.61.0/24 created
	I0229 01:43:35.287533  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.287449  163653 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:43:35.287548  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting up store path in /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771 ...
	I0229 01:43:35.287567  163581 main.go:141] libmachine: (old-k8s-version-096771) Building disk image from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 01:43:35.287624  163581 main.go:141] libmachine: (old-k8s-version-096771) Downloading /home/jenkins/minikube-integration/18063-115328/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 01:43:35.545405  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.545277  163653 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa...
	I0229 01:43:35.701834  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.701704  163653 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/old-k8s-version-096771.rawdisk...
	I0229 01:43:35.701864  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Writing magic tar header
	I0229 01:43:35.701893  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Writing SSH key tar header
	I0229 01:43:35.701933  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:35.701897  163653 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771 ...
	I0229 01:43:35.702086  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771
	I0229 01:43:35.702138  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771 (perms=drwx------)
	I0229 01:43:35.702157  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines
	I0229 01:43:35.702176  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:43:35.702191  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328
	I0229 01:43:35.702211  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines (perms=drwxr-xr-x)
	I0229 01:43:35.702225  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 01:43:35.702239  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube (perms=drwxr-xr-x)
	I0229 01:43:35.702254  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328 (perms=drwxrwxr-x)
	I0229 01:43:35.702265  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 01:43:35.702274  163581 main.go:141] libmachine: (old-k8s-version-096771) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 01:43:35.702285  163581 main.go:141] libmachine: (old-k8s-version-096771) Creating domain...
	I0229 01:43:35.702319  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home/jenkins
	I0229 01:43:35.702352  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Checking permissions on dir: /home
	I0229 01:43:35.702368  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Skipping /home - not owner
	I0229 01:43:35.703733  163581 main.go:141] libmachine: (old-k8s-version-096771) define libvirt domain using xml: 
	I0229 01:43:35.703757  163581 main.go:141] libmachine: (old-k8s-version-096771) <domain type='kvm'>
	I0229 01:43:35.703768  163581 main.go:141] libmachine: (old-k8s-version-096771)   <name>old-k8s-version-096771</name>
	I0229 01:43:35.703776  163581 main.go:141] libmachine: (old-k8s-version-096771)   <memory unit='MiB'>2200</memory>
	I0229 01:43:35.703789  163581 main.go:141] libmachine: (old-k8s-version-096771)   <vcpu>2</vcpu>
	I0229 01:43:35.703799  163581 main.go:141] libmachine: (old-k8s-version-096771)   <features>
	I0229 01:43:35.703808  163581 main.go:141] libmachine: (old-k8s-version-096771)     <acpi/>
	I0229 01:43:35.703819  163581 main.go:141] libmachine: (old-k8s-version-096771)     <apic/>
	I0229 01:43:35.703829  163581 main.go:141] libmachine: (old-k8s-version-096771)     <pae/>
	I0229 01:43:35.703923  163581 main.go:141] libmachine: (old-k8s-version-096771)     
	I0229 01:43:35.703944  163581 main.go:141] libmachine: (old-k8s-version-096771)   </features>
	I0229 01:43:35.703956  163581 main.go:141] libmachine: (old-k8s-version-096771)   <cpu mode='host-passthrough'>
	I0229 01:43:35.703967  163581 main.go:141] libmachine: (old-k8s-version-096771)   
	I0229 01:43:35.703977  163581 main.go:141] libmachine: (old-k8s-version-096771)   </cpu>
	I0229 01:43:35.703986  163581 main.go:141] libmachine: (old-k8s-version-096771)   <os>
	I0229 01:43:35.703997  163581 main.go:141] libmachine: (old-k8s-version-096771)     <type>hvm</type>
	I0229 01:43:35.704007  163581 main.go:141] libmachine: (old-k8s-version-096771)     <boot dev='cdrom'/>
	I0229 01:43:35.704030  163581 main.go:141] libmachine: (old-k8s-version-096771)     <boot dev='hd'/>
	I0229 01:43:35.704042  163581 main.go:141] libmachine: (old-k8s-version-096771)     <bootmenu enable='no'/>
	I0229 01:43:35.704058  163581 main.go:141] libmachine: (old-k8s-version-096771)   </os>
	I0229 01:43:35.704074  163581 main.go:141] libmachine: (old-k8s-version-096771)   <devices>
	I0229 01:43:35.704086  163581 main.go:141] libmachine: (old-k8s-version-096771)     <disk type='file' device='cdrom'>
	I0229 01:43:35.704109  163581 main.go:141] libmachine: (old-k8s-version-096771)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/boot2docker.iso'/>
	I0229 01:43:35.704124  163581 main.go:141] libmachine: (old-k8s-version-096771)       <target dev='hdc' bus='scsi'/>
	I0229 01:43:35.704136  163581 main.go:141] libmachine: (old-k8s-version-096771)       <readonly/>
	I0229 01:43:35.704146  163581 main.go:141] libmachine: (old-k8s-version-096771)     </disk>
	I0229 01:43:35.704156  163581 main.go:141] libmachine: (old-k8s-version-096771)     <disk type='file' device='disk'>
	I0229 01:43:35.704169  163581 main.go:141] libmachine: (old-k8s-version-096771)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 01:43:35.704193  163581 main.go:141] libmachine: (old-k8s-version-096771)       <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/old-k8s-version-096771.rawdisk'/>
	I0229 01:43:35.704204  163581 main.go:141] libmachine: (old-k8s-version-096771)       <target dev='hda' bus='virtio'/>
	I0229 01:43:35.704213  163581 main.go:141] libmachine: (old-k8s-version-096771)     </disk>
	I0229 01:43:35.704223  163581 main.go:141] libmachine: (old-k8s-version-096771)     <interface type='network'>
	I0229 01:43:35.704235  163581 main.go:141] libmachine: (old-k8s-version-096771)       <source network='mk-old-k8s-version-096771'/>
	I0229 01:43:35.704244  163581 main.go:141] libmachine: (old-k8s-version-096771)       <model type='virtio'/>
	I0229 01:43:35.704251  163581 main.go:141] libmachine: (old-k8s-version-096771)     </interface>
	I0229 01:43:35.704265  163581 main.go:141] libmachine: (old-k8s-version-096771)     <interface type='network'>
	I0229 01:43:35.704273  163581 main.go:141] libmachine: (old-k8s-version-096771)       <source network='default'/>
	I0229 01:43:35.704278  163581 main.go:141] libmachine: (old-k8s-version-096771)       <model type='virtio'/>
	I0229 01:43:35.704286  163581 main.go:141] libmachine: (old-k8s-version-096771)     </interface>
	I0229 01:43:35.704290  163581 main.go:141] libmachine: (old-k8s-version-096771)     <serial type='pty'>
	I0229 01:43:35.704297  163581 main.go:141] libmachine: (old-k8s-version-096771)       <target port='0'/>
	I0229 01:43:35.704301  163581 main.go:141] libmachine: (old-k8s-version-096771)     </serial>
	I0229 01:43:35.704307  163581 main.go:141] libmachine: (old-k8s-version-096771)     <console type='pty'>
	I0229 01:43:35.704315  163581 main.go:141] libmachine: (old-k8s-version-096771)       <target type='serial' port='0'/>
	I0229 01:43:35.704322  163581 main.go:141] libmachine: (old-k8s-version-096771)     </console>
	I0229 01:43:35.704327  163581 main.go:141] libmachine: (old-k8s-version-096771)     <rng model='virtio'>
	I0229 01:43:35.704333  163581 main.go:141] libmachine: (old-k8s-version-096771)       <backend model='random'>/dev/random</backend>
	I0229 01:43:35.704342  163581 main.go:141] libmachine: (old-k8s-version-096771)     </rng>
	I0229 01:43:35.704347  163581 main.go:141] libmachine: (old-k8s-version-096771)     
	I0229 01:43:35.704353  163581 main.go:141] libmachine: (old-k8s-version-096771)     
	I0229 01:43:35.704358  163581 main.go:141] libmachine: (old-k8s-version-096771)   </devices>
	I0229 01:43:35.704364  163581 main.go:141] libmachine: (old-k8s-version-096771) </domain>
	I0229 01:43:35.704372  163581 main.go:141] libmachine: (old-k8s-version-096771) 
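
The XML above is handed to libvirt to define the VM: a cdrom for the boot2docker ISO, a raw virtio disk, one NIC on the private mk-old-k8s-version-096771 network and one on default, a serial console, and a virtio RNG. A sketch of the define-and-start sequence, assuming the libvirt.org/go/libvirt bindings (the file name and error handling are illustrative):

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Hypothetical dump of the domain XML logged above.
	xml, err := os.ReadFile("old-k8s-version-096771.xml")
	if err != nil {
		panic(err)
	}
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	// "Ensuring networks are active..." for both NICs before booting.
	for _, name := range []string{"default", "mk-old-k8s-version-096771"} {
		netw, err := conn.LookupNetworkByName(name)
		if err != nil {
			panic(err)
		}
		if active, _ := netw.IsActive(); !active {
			if err := netw.Create(); err != nil {
				panic(err)
			}
		}
	}
	if err := dom.Create(); err != nil { // "Creating domain..."
		panic(err)
	}
	fmt.Println("domain started")
}
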
	I0229 01:43:35.708809  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:0d:1d:93 in network default
	I0229 01:43:35.709549  163581 main.go:141] libmachine: (old-k8s-version-096771) Ensuring networks are active...
	I0229 01:43:35.709565  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:35.710327  163581 main.go:141] libmachine: (old-k8s-version-096771) Ensuring network default is active
	I0229 01:43:35.710806  163581 main.go:141] libmachine: (old-k8s-version-096771) Ensuring network mk-old-k8s-version-096771 is active
	I0229 01:43:35.711474  163581 main.go:141] libmachine: (old-k8s-version-096771) Getting domain xml...
	I0229 01:43:35.712238  163581 main.go:141] libmachine: (old-k8s-version-096771) Creating domain...
	I0229 01:43:36.964653  163581 main.go:141] libmachine: (old-k8s-version-096771) Waiting to get IP...
	I0229 01:43:36.965593  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:36.966100  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:36.966151  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:36.966097  163653 retry.go:31] will retry after 293.777357ms: waiting for machine to come up
	I0229 01:43:37.261606  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:37.262248  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:37.262276  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:37.262205  163653 retry.go:31] will retry after 375.802812ms: waiting for machine to come up
	I0229 01:43:37.639743  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:37.640296  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:37.640321  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:37.640239  163653 retry.go:31] will retry after 386.878794ms: waiting for machine to come up
	I0229 01:43:38.028876  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:38.029523  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:38.029558  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:38.029493  163653 retry.go:31] will retry after 405.202763ms: waiting for machine to come up
	I0229 01:43:38.436050  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:38.436609  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:38.436637  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:38.436561  163653 retry.go:31] will retry after 666.257121ms: waiting for machine to come up
	I0229 01:43:39.104315  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:39.104911  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:39.104942  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:39.104848  163653 retry.go:31] will retry after 917.882498ms: waiting for machine to come up
	I0229 01:43:40.024476  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:40.025091  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:40.025126  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:40.025001  163653 retry.go:31] will retry after 1.076191024s: waiting for machine to come up
	I0229 01:43:41.102776  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:41.103423  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:41.103458  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:41.103376  163653 retry.go:31] will retry after 1.096107137s: waiting for machine to come up
	I0229 01:43:42.201765  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:42.202404  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:42.202428  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:42.202352  163653 retry.go:31] will retry after 1.563441628s: waiting for machine to come up
	I0229 01:43:43.766881  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:43.767332  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:43.767354  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:43.767292  163653 retry.go:31] will retry after 2.307600272s: waiting for machine to come up
	I0229 01:43:46.076408  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:46.077036  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:46.077056  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:46.076958  163653 retry.go:31] will retry after 2.28648061s: waiting for machine to come up
	I0229 01:43:48.366549  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:48.367238  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:48.367266  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:48.367162  163653 retry.go:31] will retry after 2.330341516s: waiting for machine to come up
	I0229 01:43:50.699673  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:50.700214  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:50.700246  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:50.700154  163653 retry.go:31] will retry after 3.749727122s: waiting for machine to come up
	I0229 01:43:54.454055  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:54.454773  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:43:54.454805  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:43:54.454727  163653 retry.go:31] will retry after 4.741264871s: waiting for machine to come up
	I0229 01:43:59.199083  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.199739  163581 main.go:141] libmachine: (old-k8s-version-096771) Found IP for machine: 192.168.61.59
	I0229 01:43:59.199765  163581 main.go:141] libmachine: (old-k8s-version-096771) Reserving static IP address...
	I0229 01:43:59.199781  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has current primary IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.200185  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-096771", mac: "52:54:00:82:00:09", ip: "192.168.61.59"} in network mk-old-k8s-version-096771
	I0229 01:43:59.277050  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Getting to WaitForSSH function...
	I0229 01:43:59.277083  163581 main.go:141] libmachine: (old-k8s-version-096771) Reserved static IP address: 192.168.61.59
	I0229 01:43:59.277092  163581 main.go:141] libmachine: (old-k8s-version-096771) Waiting for SSH to be available...
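
The retry.go lines above poll libvirt for a DHCP lease with a growing, jittered delay (roughly 0.3s up to several seconds) until the guest reports an address. A minimal sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network for a DHCP lease.
func lookupIP() (string, bool) { return "", false }

func main() {
	delay := 300 * time.Millisecond
	for attempt := 0; attempt < 15; attempt++ {
		if ip, ok := lookupIP(); ok {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Add jitter so concurrent test VMs do not poll in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff, as the logged intervals do
	}
}
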
	I0229 01:43:59.280864  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.281525  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:minikube Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.281557  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.281765  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Using SSH client type: external
	I0229 01:43:59.281819  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa (-rw-------)
	I0229 01:43:59.281855  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:43:59.281874  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | About to run SSH command:
	I0229 01:43:59.281893  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | exit 0
	I0229 01:43:59.413796  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | SSH cmd err, output: <nil>: 
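
"Waiting for SSH" is nothing more than running `exit 0` through an external ssh process with the options logged above until it returns success. A sketch of that probe (the IP and ssh options are copied from the log; the key path is a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshAlive runs `exit 0` on the guest; a nil error means sshd is up.
func sshAlive(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshAlive("192.168.61.59", "/path/to/machines/old-k8s-version-096771/id_rsa") {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH cmd err, output: <nil>") // what the log prints on success
}
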
	I0229 01:43:59.414122  163581 main.go:141] libmachine: (old-k8s-version-096771) KVM machine creation complete!
	I0229 01:43:59.414457  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetConfigRaw
	I0229 01:43:59.415086  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:43:59.415295  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:43:59.415458  163581 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 01:43:59.415475  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetState
	I0229 01:43:59.417133  163581 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 01:43:59.417156  163581 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 01:43:59.417163  163581 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 01:43:59.417173  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:43:59.420191  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.420606  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.420634  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.420861  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:43:59.421111  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.421316  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.421495  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:43:59.421680  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:59.421931  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:43:59.421948  163581 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 01:43:59.529490  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:43:59.529524  163581 main.go:141] libmachine: Detecting the provisioner...
	I0229 01:43:59.529536  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:43:59.532438  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.532838  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.532863  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.533085  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:43:59.533309  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.533518  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.533703  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:43:59.533921  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:59.534151  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:43:59.534169  163581 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 01:43:59.655041  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 01:43:59.655135  163581 main.go:141] libmachine: found compatible host: buildroot
	I0229 01:43:59.655152  163581 main.go:141] libmachine: Provisioning with buildroot...
	I0229 01:43:59.655178  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:43:59.655475  163581 buildroot.go:166] provisioning hostname "old-k8s-version-096771"
	I0229 01:43:59.655527  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:43:59.655731  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:43:59.658858  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.659335  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.659371  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.659590  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:43:59.659794  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.659990  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.660176  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:43:59.660377  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:59.660605  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:43:59.660625  163581 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-096771 && echo "old-k8s-version-096771" | sudo tee /etc/hostname
	I0229 01:43:59.793550  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-096771
	
	I0229 01:43:59.793585  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:43:59.796877  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.797326  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.797359  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.797602  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:43:59.797860  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.798091  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:43:59.798254  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:43:59.798460  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:43:59.798674  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:43:59.798708  163581 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-096771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-096771/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-096771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:43:59.918864  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:43:59.918897  163581 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:43:59.918945  163581 buildroot.go:174] setting up certificates
	I0229 01:43:59.918958  163581 provision.go:83] configureAuth start
	I0229 01:43:59.918989  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:43:59.919337  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:43:59.922226  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.922585  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.922607  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.922812  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:43:59.925422  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.925736  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:43:59.925762  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:43:59.925928  163581 provision.go:138] copyHostCerts
	I0229 01:43:59.925989  163581 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:43:59.926002  163581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:43:59.926069  163581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:43:59.926209  163581 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:43:59.926218  163581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:43:59.926244  163581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:43:59.926387  163581 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:43:59.926401  163581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:43:59.926432  163581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:43:59.926508  163581 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-096771 san=[192.168.61.59 192.168.61.59 localhost 127.0.0.1 minikube old-k8s-version-096771]
	I0229 01:44:00.029587  163581 provision.go:172] copyRemoteCerts
	I0229 01:44:00.029654  163581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:44:00.029685  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:00.032647  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.033120  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:00.033146  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.033341  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:00.033562  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.033753  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:00.033970  163581 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:44:00.124471  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:44:00.153590  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 01:44:00.179690  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
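
configureAuth generates a server certificate whose SANs cover every name the node may be reached by (its IP, localhost, 127.0.0.1, minikube, and the profile name) and copies it to /etc/docker on the guest. A compact sketch of producing such a SAN certificate with the Go standard library; unlike minikube, which signs with its ca.pem/ca-key.pem, this one self-signs for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-096771"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged by provision.go.
		IPAddresses: []net.IP{net.ParseIP("192.168.61.59"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-096771"},
	}
	// Self-signed: the template doubles as parent; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
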
	I0229 01:44:00.208458  163581 provision.go:86] duration metric: configureAuth took 289.479171ms
	I0229 01:44:00.208491  163581 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:44:00.208760  163581 config.go:182] Loaded profile config "old-k8s-version-096771": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 01:44:00.208790  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:00.209049  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:00.212070  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.212505  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:00.212534  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.212753  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:00.212971  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.213155  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.213319  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:00.213525  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:00.213763  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:44:00.213808  163581 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:44:00.327475  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:44:00.327502  163581 buildroot.go:70] root file system type: tmpfs
	I0229 01:44:00.327688  163581 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:44:00.327716  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:00.330871  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.331250  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:00.331307  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.331482  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:00.331669  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.331879  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.332013  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:00.332170  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:00.332418  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:44:00.332479  163581 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:44:00.461322  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:44:00.461392  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:00.464589  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.465057  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:00.465089  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:00.465338  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:00.465556  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.465759  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:00.465923  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:00.466077  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:00.466288  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:44:00.466322  163581 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:44:01.613619  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
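The command above is what makes the unit update idempotent: diff exits 0 when the live unit already matches the rendered one (so nothing is touched) and non-zero when it differs or, as on this first boot, does not exist yet, which triggers the move, daemon-reload, enable, and restart. The pattern in isolation, with hypothetical file names:

    # diff exits 0 on identical files, non-zero on a difference or a missing
    # target, so the braces run only when an install/update is actually needed.
    # current.conf and candidate.conf are placeholder names.
    diff -u current.conf candidate.conf \
      || { mv candidate.conf current.conf && echo "installed new config"; }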
	
	I0229 01:44:01.613653  163581 main.go:141] libmachine: Checking connection to Docker...
	I0229 01:44:01.613672  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetURL
	I0229 01:44:01.615295  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | Using libvirt version 6000000
	I0229 01:44:01.618219  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.618673  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.618694  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.618881  163581 main.go:141] libmachine: Docker is up and running!
	I0229 01:44:01.618893  163581 main.go:141] libmachine: Reticulating splines...
	I0229 01:44:01.618902  163581 client.go:171] LocalClient.Create took 26.417245077s
	I0229 01:44:01.618932  163581 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-096771" took 26.417324988s
	I0229 01:44:01.618947  163581 start.go:300] post-start starting for "old-k8s-version-096771" (driver="kvm2")
	I0229 01:44:01.618960  163581 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:44:01.618982  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:01.619270  163581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:44:01.619301  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:01.621758  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.622105  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.622145  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.622308  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:01.622571  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:01.622759  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:01.622966  163581 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:44:01.713048  163581 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:44:01.717904  163581 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:44:01.717931  163581 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:44:01.718009  163581 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:44:01.718111  163581 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:44:01.718222  163581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:44:01.729349  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:44:01.762961  163581 start.go:303] post-start completed in 143.995914ms
	I0229 01:44:01.763016  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetConfigRaw
	I0229 01:44:01.763648  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:44:01.766507  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.766927  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.766962  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.767306  163581 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/config.json ...
	I0229 01:44:01.767485  163581 start.go:128] duration metric: createHost completed in 26.588332278s
	I0229 01:44:01.767509  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:01.770020  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.770393  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.770421  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.770594  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:01.770771  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:01.770931  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:01.771121  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:01.771292  163581 main.go:141] libmachine: Using SSH client type: native
	I0229 01:44:01.771506  163581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:44:01.771526  163581 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:44:01.886999  163581 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171041.869148326
	
	I0229 01:44:01.887028  163581 fix.go:206] guest clock: 1709171041.869148326
	I0229 01:44:01.887038  163581 fix.go:219] Guest: 2024-02-29 01:44:01.869148326 +0000 UTC Remote: 2024-02-29 01:44:01.767497513 +0000 UTC m=+34.178557596 (delta=101.650813ms)
	I0229 01:44:01.887066  163581 fix.go:190] guest clock delta is within tolerance: 101.650813ms
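fix.go compares the guest clock (read over SSH with date +%s.%N) against the host clock and only forces a resync when the delta exceeds its tolerance; here the ~102ms delta passes. A hedged shell sketch of the same comparison (the user, host, and use of bc are assumptions):

    # Read guest time over SSH and print the guest-host delta in seconds.
    host_now=$(date +%s.%N)
    guest_now=$(ssh docker@192.168.61.59 'date +%s.%N')
    echo "guest-host delta: $(echo "$guest_now - $host_now" | bc) s"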
	I0229 01:44:01.887074  163581 start.go:83] releasing machines lock for "old-k8s-version-096771", held for 26.70808759s
	I0229 01:44:01.887104  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:01.887403  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:44:01.890709  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.891123  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.891146  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.891391  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:01.892230  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:01.892457  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:44:01.892573  163581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:44:01.892617  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:01.892717  163581 ssh_runner.go:195] Run: cat /version.json
	I0229 01:44:01.892744  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:44:01.895293  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.895397  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.895792  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.895828  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.895863  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:01.895882  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:01.896069  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:01.896177  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:44:01.896279  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:01.896362  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:44:01.896433  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:01.896511  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:44:01.896589  163581 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:44:01.896642  163581 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:44:01.976003  163581 ssh_runner.go:195] Run: systemctl --version
	I0229 01:44:02.002060  163581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:44:02.008370  163581 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:44:02.008435  163581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 01:44:02.020255  163581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 01:44:02.041684  163581 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
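The two find/sed passes above rewrite whatever bridge/podman CNI configs exist so their subnet and gateway match the 10.244.0.0/16 pod CIDR that kubeadm is configured with later; IPv6 dst/subnet entries are dropped along the way. The effect on the one file the log reports touching, in isolation:

    # Pin the podman bridge conflist to the pod CIDR; same sed expressions
    # as the log, applied directly to the file it matched.
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist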
	I0229 01:44:02.041721  163581 start.go:475] detecting cgroup driver to use...
	I0229 01:44:02.041865  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:44:02.070151  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 01:44:02.090631  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:44:02.114465  163581 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:44:02.114519  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:44:02.127409  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:44:02.140177  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:44:02.153130  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:44:02.164309  163581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:44:02.176839  163581 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:44:02.190780  163581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:44:02.205772  163581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:44:02.220566  163581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:02.378249  163581 ssh_runner.go:195] Run: sudo systemctl restart containerd
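The sed series above lines containerd up with what the cluster expects: pause:3.1 as the sandbox image (the v1.16 default), the runc v2 shim instead of the legacy v1 linux runtime, cgroupfs rather than the systemd cgroup driver, and /etc/cni/net.d as the CNI conf dir. Condensed into a single invocation (a sketch; the log applies the edits one at a time):

    # One-shot equivalent of the containerd config edits, then restart.
    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd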
	I0229 01:44:02.407287  163581 start.go:475] detecting cgroup driver to use...
	I0229 01:44:02.407364  163581 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:44:02.425866  163581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:44:02.449553  163581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:44:02.472607  163581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:44:02.487249  163581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:44:02.501394  163581 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:44:02.532641  163581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:44:02.548034  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:44:02.571060  163581 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:44:02.575426  163581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:44:02.584844  163581 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:44:02.603278  163581 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:44:02.750375  163581 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:44:02.893584  163581 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:44:02.893756  163581 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:44:02.912743  163581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:03.053383  163581 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:44:04.492857  163581 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.439428542s)
	I0229 01:44:04.492932  163581 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:44:04.519753  163581 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:44:04.552559  163581 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 01:44:04.552616  163581 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:44:04.555972  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:04.556450  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:43:51 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:44:04.556487  163581 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:44:04.556580  163581 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 01:44:04.561040  163581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
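The /etc/hosts update is a filter-and-append: grep -v strips any stale host.minikube.internal line, the current mapping is appended, and the temp file is copied back over /etc/hosts. The same idiom recurs below for control-plane.minikube.internal. Generalized:

    # Drop any stale mapping, append the current one, install the result.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.61.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts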
	I0229 01:44:04.577445  163581 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:44:04.577537  163581 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:44:04.597161  163581 docker.go:685] Got preloaded images: 
	I0229 01:44:04.597184  163581 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:44:04.597228  163581 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:44:04.612088  163581 ssh_runner.go:195] Run: which lz4
	I0229 01:44:04.618332  163581 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:44:04.624258  163581 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:44:04.624290  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 01:44:06.111682  163581 docker.go:649] Took 1.493382 seconds to copy over tarball
	I0229 01:44:06.111744  163581 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:44:08.499977  163581 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.388201093s)
	I0229 01:44:08.500011  163581 ssh_runner.go:146] rm: /preloaded.tar.lz4
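The preload is an lz4-compressed tarball (~370 MB per the scp above) containing a pre-populated /var/lib/docker for v1.16.0, so the node avoids pulling each image individually. The unpack step by hand, with the paths from the log:

    # Extract the preloaded images into /var, preserving security.capability
    # xattrs so file capabilities survive, then remove the tarball.
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4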
	I0229 01:44:08.545175  163581 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:44:08.559339  163581 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 01:44:08.579531  163581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:44:08.750649  163581 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:44:12.491537  163581 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.740853018s)
	I0229 01:44:12.491621  163581 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:44:12.510848  163581 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:44:12.510867  163581 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:44:12.510879  163581 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:44:12.512280  163581 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:44:12.512320  163581 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:44:12.512587  163581 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:44:12.512622  163581 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:12.512700  163581 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:44:12.512843  163581 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:44:12.513332  163581 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:44:12.513467  163581 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:44:12.513578  163581 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:44:12.513651  163581 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 01:44:12.515033  163581 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:12.515200  163581 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:44:12.515286  163581 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:44:12.515622  163581 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 01:44:12.515705  163581 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 01:44:12.519442  163581 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 01:44:12.645828  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 01:44:12.654157  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 01:44:12.654499  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:44:12.657422  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:44:12.664729  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:44:12.664993  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:44:12.677038  163581 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 01:44:12.677095  163581 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 01:44:12.677137  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 01:44:12.684326  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 01:44:12.734882  163581 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 01:44:12.734930  163581 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 01:44:12.734973  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 01:44:12.744761  163581 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 01:44:12.744804  163581 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:44:12.744847  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:44:12.744943  163581 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 01:44:12.744966  163581 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:44:12.744997  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:44:12.745105  163581 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 01:44:12.745136  163581 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:44:12.745162  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:44:12.767973  163581 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 01:44:12.768029  163581 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:44:12.768083  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:44:12.792479  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 01:44:12.792489  163581 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 01:44:12.792551  163581 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:44:12.792593  163581 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 01:44:12.823914  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 01:44:12.851322  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 01:44:12.851390  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 01:44:12.851442  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 01:44:12.854296  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 01:44:12.860877  163581 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 01:44:13.089251  163581 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:44:13.113256  163581 cache_images.go:92] LoadImages completed in 602.355284ms
	W0229 01:44:13.113361  163581 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
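The warning is a naming mismatch rather than missing images: the v1.16-era preload ships k8s.gcr.io/* tags (see the docker images output above) while this minikube looks for registry.k8s.io/* names, so LoadImages removes the preloaded tags and then finds no per-image cache files to fall back on; kubeadm pulls what it needs during preflight instead. A hypothetical manual workaround, not what minikube does here, would be to alias the tags before the removal step:

    # Hypothetical: retag the preloaded k8s.gcr.io images under the
    # registry.k8s.io names being looked up (v1.16.0 components).
    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/${img}:v1.16.0" "registry.k8s.io/${img}:v1.16.0"
    done
    docker tag k8s.gcr.io/etcd:3.3.15-0 registry.k8s.io/etcd:3.3.15-0
    docker tag k8s.gcr.io/coredns:1.6.2 registry.k8s.io/coredns:1.6.2
    docker tag k8s.gcr.io/pause:3.1 registry.k8s.io/pause:3.1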
	I0229 01:44:13.113433  163581 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:44:13.148946  163581 cni.go:84] Creating CNI manager for ""
	I0229 01:44:13.148977  163581 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:44:13.148997  163581 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:44:13.149028  163581 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.59 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-096771 NodeName:old-k8s-version-096771 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:44:13.149226  163581 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-096771"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-096771
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.59:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
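The rendered config above uses the kubeadm.k8s.io/v1beta1 API, which v1.16's kubeadm still accepts. One hedged way to check a config like this before letting it touch the node is kubeadm's dry-run mode (exact dry-run coverage on v1.16 is an assumption worth verifying):

    # Validate the generated config without applying any changes.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run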
	
	I0229 01:44:13.149348  163581 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-096771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:44:13.149426  163581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 01:44:13.162363  163581 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:44:13.162460  163581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:44:13.174951  163581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 01:44:13.194665  163581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:44:13.214134  163581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 01:44:13.237139  163581 ssh_runner.go:195] Run: grep 192.168.61.59	control-plane.minikube.internal$ /etc/hosts
	I0229 01:44:13.241729  163581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:44:13.261085  163581 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771 for IP: 192.168.61.59
	I0229 01:44:13.261124  163581 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.261290  163581 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:44:13.261355  163581 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:44:13.261425  163581 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.key
	I0229 01:44:13.261442  163581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.crt with IP's: []
	I0229 01:44:13.386386  163581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.crt ...
	I0229 01:44:13.386415  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.crt: {Name:mkfc094b891edf3b0f61f75731b8b584ab560496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.386574  163581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.key ...
	I0229 01:44:13.386594  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.key: {Name:mk8ca1ec7f29944a097dc886adb084a1354f8ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.386699  163581 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key.a8f3ad05
	I0229 01:44:13.386722  163581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt.a8f3ad05 with IP's: [192.168.61.59 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 01:44:13.598315  163581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt.a8f3ad05 ...
	I0229 01:44:13.598354  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt.a8f3ad05: {Name:mk7b68f979c9b015d93e869085a8e404be8e3d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.598540  163581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key.a8f3ad05 ...
	I0229 01:44:13.598561  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key.a8f3ad05: {Name:mka63d30d76a7cbb9f16ee47e0a4421ce895db86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.598661  163581 certs.go:337] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt.a8f3ad05 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt
	I0229 01:44:13.598787  163581 certs.go:341] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key.a8f3ad05 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key
	I0229 01:44:13.598891  163581 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key
	I0229 01:44:13.598916  163581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.crt with IP's: []
	I0229 01:44:13.672584  163581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.crt ...
	I0229 01:44:13.672614  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.crt: {Name:mk6ff04638a277f196096d6e2e43dda138592b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.672776  163581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key ...
	I0229 01:44:13.672790  163581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key: {Name:mkff2a1fed5ab026b0e3b03cf49fa81ee6f24afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:44:13.672948  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:44:13.672984  163581 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:44:13.672996  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:44:13.673019  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:44:13.673041  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:44:13.673065  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:44:13.673097  163581 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:44:13.673674  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:44:13.711441  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:44:13.745562  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:44:13.782701  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 01:44:13.833035  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:44:13.864454  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:44:13.899858  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:44:13.930357  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:44:13.966477  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:44:13.995896  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:44:14.030783  163581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:44:14.064208  163581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:44:14.086764  163581 ssh_runner.go:195] Run: openssl version
	I0229 01:44:14.095209  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:44:14.108552  163581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:44:14.113882  163581 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:44:14.113944  163581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:44:14.120372  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:44:14.134199  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:44:14.148118  163581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:44:14.154441  163581 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:44:14.154520  163581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:44:14.161444  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:44:14.175733  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:44:14.189652  163581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:14.195707  163581 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:14.195771  163581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:44:14.202889  163581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
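The 51391683.0, 3ec20f2e.0, and b5213941.0 link names are OpenSSL subject hashes: certificate lookup in /etc/ssl/certs expects each CA to be reachable as <subject-hash>.0, which is exactly the value the openssl x509 -hash -noout runs above compute. Reproducing the minikubeCA link by hand:

    # Derive the subject hash and create the lookup symlink, as the log
    # does for minikubeCA (hash b5213941 in this run).
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"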
	I0229 01:44:14.218494  163581 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:44:14.224755  163581 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 01:44:14.224815  163581 kubeadm.go:404] StartCluster: {Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.59 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
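The StartCluster dump above is the full machine config for profile old-k8s-version-096771; the fields that matter for this failure are Driver:kvm2, Memory:2200, and KubernetesVersion:v1.16.0. Reconstructed from those fields (not quoted from the log, so flags may differ from the actual test invocation), the equivalent start command would look roughly like:

    out/minikube-linux-amd64 start -p old-k8s-version-096771 \
        --kubernetes-version=v1.16.0 --memory=2200 --driver=kvm2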
	I0229 01:44:14.224968  163581 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:44:14.247340  163581 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:44:14.263767  163581 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:44:14.278944  163581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:44:14.292881  163581 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:44:14.292929  163581 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:44:14.453969  163581 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:44:14.454039  163581 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:44:14.986932  163581 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:44:14.987091  163581 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:44:14.987186  163581 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:44:15.218131  163581 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:44:15.231353  163581 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:44:15.245626  163581 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:44:15.429493  163581 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:44:15.432350  163581 out.go:204]   - Generating certificates and keys ...
	I0229 01:44:15.432460  163581 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:44:15.432535  163581 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:44:16.277455  163581 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 01:44:16.634161  163581 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 01:44:16.817900  163581 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 01:44:16.889397  163581 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 01:44:17.027746  163581 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 01:44:17.027922  163581 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-096771 localhost] and IPs [192.168.61.59 127.0.0.1 ::1]
	I0229 01:44:17.166225  163581 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 01:44:17.166400  163581 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-096771 localhost] and IPs [192.168.61.59 127.0.0.1 ::1]
	I0229 01:44:17.551685  163581 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 01:44:17.707126  163581 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 01:44:17.852347  163581 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 01:44:17.852668  163581 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:44:17.995111  163581 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:44:18.116467  163581 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:44:18.368210  163581 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:44:18.498309  163581 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:44:18.499248  163581 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:44:18.501134  163581 out.go:204]   - Booting up control plane ...
	I0229 01:44:18.501259  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:44:18.506885  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:44:18.508303  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:44:18.509195  163581 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:44:18.513917  163581 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:44:58.508332  163581 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:44:58.510214  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:44:58.510464  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:45:03.510931  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:45:03.511214  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:45:13.511273  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:45:13.511541  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:45:33.513480  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:45:33.513754  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:13.513014  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:46:13.513476  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:46:13.513520  163581 kubeadm.go:322] 
	I0229 01:46:13.513614  163581 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:46:13.513707  163581 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:46:13.513725  163581 kubeadm.go:322] 
	I0229 01:46:13.513838  163581 kubeadm.go:322] This error is likely caused by:
	I0229 01:46:13.513894  163581 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:46:13.514049  163581 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:46:13.514078  163581 kubeadm.go:322] 
	I0229 01:46:13.514250  163581 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:46:13.514317  163581 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:46:13.514359  163581 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:46:13.514382  163581 kubeadm.go:322] 
	I0229 01:46:13.514535  163581 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:46:13.514685  163581 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:46:13.514795  163581 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:46:13.514881  163581 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:46:13.515005  163581 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:46:13.515057  163581 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:46:13.515513  163581 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:46:13.515726  163581 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:46:13.515882  163581 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:46:13.516017  163581 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:46:13.516161  163581 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
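The probe that keeps failing in the kubelet-check lines above is the kubelet's local healthz endpoint on port 10248; it can be exercised by hand on the node, while kubeadm init is waiting, to distinguish "kubelet never started" from "kubelet started but unhealthy":

    # "ok" means the kubelet is serving; "connection refused", as seen above,
    # means the process is not listening at all
    curl -sSL http://localhost:10248/healthz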
	W0229 01:46:13.516333  163581 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-096771 localhost] and IPs [192.168.61.59 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-096771 localhost] and IPs [192.168.61.59 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
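Before minikube retries, the troubleshooting commands kubeadm suggests above can be run in one pass on the node; a sketch combining them (the tail length and the <CONTAINERID> placeholder are illustrative):

    systemctl status kubelet --no-pager                 # is the unit active at all?
    journalctl -xeu kubelet --no-pager | tail -n 100    # recent kubelet errors
    docker ps -a | grep kube | grep -v pause            # did any control-plane containers start?
    docker logs <CONTAINERID>                           # inspect a failing container found above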
	
	I0229 01:46:13.516392  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:46:13.969474  163581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:46:13.984481  163581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:46:13.994593  163581 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:46:13.994641  163581 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:46:14.142177  163581 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:46:14.186300  163581 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:46:14.274258  163581 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:48:10.382854  163581 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:48:10.382964  163581 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:48:10.384282  163581 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:48:10.384354  163581 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:48:10.384429  163581 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:48:10.384543  163581 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:48:10.384653  163581 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:48:10.384776  163581 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:48:10.384867  163581 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:48:10.384914  163581 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:48:10.384972  163581 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:48:10.386713  163581 out.go:204]   - Generating certificates and keys ...
	I0229 01:48:10.386803  163581 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:48:10.386860  163581 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:48:10.386923  163581 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:48:10.386999  163581 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:48:10.387099  163581 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:48:10.387182  163581 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:48:10.387267  163581 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:48:10.387349  163581 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:48:10.387436  163581 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:48:10.387533  163581 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:48:10.387587  163581 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:48:10.387676  163581 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:48:10.387761  163581 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:48:10.387854  163581 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:48:10.387911  163581 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:48:10.387987  163581 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:48:10.388072  163581 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:48:10.389629  163581 out.go:204]   - Booting up control plane ...
	I0229 01:48:10.389719  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:48:10.389833  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:48:10.389932  163581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:48:10.390030  163581 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:48:10.390173  163581 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:48:10.390216  163581 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:48:10.390287  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:10.390445  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:10.390529  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:10.390711  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:10.390812  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:10.391074  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:10.391141  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:10.391312  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:10.391413  163581 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:48:10.391619  163581 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:48:10.391633  163581 kubeadm.go:322] 
	I0229 01:48:10.391684  163581 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:48:10.391739  163581 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:48:10.391750  163581 kubeadm.go:322] 
	I0229 01:48:10.391783  163581 kubeadm.go:322] This error is likely caused by:
	I0229 01:48:10.391819  163581 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:48:10.391931  163581 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:48:10.391939  163581 kubeadm.go:322] 
	I0229 01:48:10.392063  163581 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:48:10.392111  163581 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:48:10.392155  163581 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:48:10.392163  163581 kubeadm.go:322] 
	I0229 01:48:10.392261  163581 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:48:10.392381  163581 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:48:10.392479  163581 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:48:10.392549  163581 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:48:10.392649  163581 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:48:10.392724  163581 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:48:10.392749  163581 kubeadm.go:406] StartCluster complete in 3m56.167937061s
	I0229 01:48:10.392836  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:48:10.417138  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.417157  163581 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:48:10.417210  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:48:10.434970  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.434993  163581 logs.go:278] No container was found matching "etcd"
	I0229 01:48:10.435051  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:48:10.452551  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.452579  163581 logs.go:278] No container was found matching "coredns"
	I0229 01:48:10.452644  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:48:10.469543  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.469566  163581 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:48:10.469618  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:48:10.485744  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.485787  163581 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:48:10.485857  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:48:10.502871  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.502891  163581 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:48:10.502938  163581 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:48:10.520796  163581 logs.go:276] 0 containers: []
	W0229 01:48:10.520822  163581 logs.go:278] No container was found matching "kindnet"
	I0229 01:48:10.520836  163581 logs.go:123] Gathering logs for dmesg ...
	I0229 01:48:10.520850  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:48:10.535557  163581 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:48:10.535582  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:48:10.606599  163581 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:48:10.606622  163581 logs.go:123] Gathering logs for Docker ...
	I0229 01:48:10.606638  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:48:10.648844  163581 logs.go:123] Gathering logs for container status ...
	I0229 01:48:10.648878  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:48:10.711068  163581 logs.go:123] Gathering logs for kubelet ...
	I0229 01:48:10.711108  163581 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
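The five Run lines above are minikube's post-mortem capture after StartCluster fails; to reproduce the same bundle manually over SSH to the node, the commands are, verbatim from the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo journalctl -u kubelet -n 400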
	W0229 01:48:10.775908  163581 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:48:10.775985  163581 out.go:239] * 
	W0229 01:48:10.776076  163581 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:48:10.776111  163581 out.go:239] * 
	W0229 01:48:10.776994  163581 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:48:10.780198  163581 out.go:177] 
	W0229 01:48:10.781807  163581 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:48:10.781912  163581 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:48:10.781946  163581 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:48:10.783349  163581 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0": exit status 109
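The first start exits 109 with K8S_KUBELET_NOT_RUNNING, and minikube's own Suggestion line points at the cgroup-driver mismatch flagged in kubeadm's preflight warnings ("cgroupfs" detected, "systemd" recommended). A minimal retry sketch following that suggestion, reusing the flags from this job's failing invocation (whether systemd is the right driver for this guest image is an assumption; checking "Cgroup Driver" in 'docker info' on the node would confirm the Docker side):

	minikube start -p old-k8s-version-096771 --driver=kvm2 --memory=2200 \
	  --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

The related issue the tool cites above, https://github.com/kubernetes/minikube/issues/4172, tracks the same suggestion.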
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 6 (253.136608ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:48:11.084087  170256 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-096771" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-096771" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.52s)
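For local triage, the kubelet-side commands kubeadm recommends in the output above can be run inside the guest over SSH; this is the same recipe from the log, wrapped for this job's profile (a sketch, assuming the VM is up enough to accept SSH):

	minikube ssh -p old-k8s-version-096771 "systemctl status kubelet"
	minikube ssh -p old-k8s-version-096771 "sudo journalctl -xeu kubelet"
	minikube ssh -p old-k8s-version-096771 "docker ps -a | grep kube | grep -v pause"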

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-096771 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-096771 create -f testdata/busybox.yaml: exit status 1 (49.105377ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-096771" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-096771 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 6 (235.271977ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:48:11.369990  170295 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-096771" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-096771" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 6 (248.572943ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:48:11.619636  170324 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-096771" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-096771" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)
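DeployApp is a cascade failure rather than an independent bug: FirstStart never wrote the cluster into the kubeconfig, so the "old-k8s-version-096771" context does not exist and kubectl exits before contacting any apiserver. The status output above already names the repair; outside CI the check-and-fix sequence would be (a sketch using this job's profile name):

	kubectl config get-contexts
	minikube update-context -p old-k8s-version-096771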

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0229 01:48:17.510379  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:48:35.606202  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.611482  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.621797  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.642068  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.682521  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.762888  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:35.923691  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:36.244788  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:36.885027  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:37.991430  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:48:38.165861  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:40.726732  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:45.847421  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:49.176855  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:48:52.961314  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:48:55.106754  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:48:55.671007  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.676276  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.686537  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.706839  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.747154  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.827516  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:55.988577  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:56.087925  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:48:56.309268  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:56.950301  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:48:58.230484  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:49:00.791652  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:49:05.911968  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:49:10.694419  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 01:49:16.152789  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:49:16.568424  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:49:18.951793  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:49:20.646232  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:49:28.084391  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:49:36.632988  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:49:40.913663  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:49:42.886638  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:49:46.680433  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.366027945s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-096771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-096771 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-096771 describe deploy/metrics-server -n kube-system: exit status 1 (51.815664ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-096771" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-096771 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 6 (251.058346ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 01:49:49.287174  170622 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-096771" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-096771" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.67s)
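MK_ADDON_ENABLE is another downstream symptom: the addon manifests are applied from inside the VM against https://localhost:8443, and every apply fails with connection refused because the apiserver never came up after the kubelet failure in FirstStart. Before retrying 'addons enable' locally, it is worth confirming the control plane is actually serving (a triage sketch; the curl healthz probe is an assumed sanity check, not something the test itself runs):

	minikube status -p old-k8s-version-096771
	minikube ssh -p old-k8s-version-096771 "curl -sk https://localhost:8443/healthz"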

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (519.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0229 01:49:55.517132  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.522501  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.532821  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.553165  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.593489  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.674208  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:55.834625  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:56.155462  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:56.796260  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:49:57.529577  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:49:57.863070  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:49:58.077452  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:50:00.637642  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:50:05.757873  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:50:14.362876  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:50:15.999023  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:50:17.593181  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:50:36.479749  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:50:40.872859  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:51:05.333011  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:51:05.932834  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:51:11.261969  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:51:17.440934  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:51:19.450772  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:51:33.017965  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:51:38.947966  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:51:39.513414  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:51:44.239562  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: exit status 109 (8m38.223794556s)

                                                
                                                
-- stdout --
	* [old-k8s-version-096771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-096771 in cluster old-k8s-version-096771
	* Restarting existing kvm2 VM for "old-k8s-version-096771" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 01:49:51.748889  170748 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:49:51.749159  170748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:49:51.749171  170748 out.go:304] Setting ErrFile to fd 2...
	I0229 01:49:51.749177  170748 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:49:51.749383  170748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:49:51.749950  170748 out.go:298] Setting JSON to false
	I0229 01:49:51.750940  170748 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5543,"bootTime":1709165849,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:49:51.751005  170748 start.go:139] virtualization: kvm guest
	I0229 01:49:51.753952  170748 out.go:177] * [old-k8s-version-096771] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:49:51.755409  170748 notify.go:220] Checking for updates...
	I0229 01:49:51.755420  170748 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:49:51.756761  170748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:49:51.758021  170748 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:49:51.759256  170748 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:49:51.760459  170748 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:49:51.761606  170748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:49:51.763213  170748 config.go:182] Loaded profile config "old-k8s-version-096771": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 01:49:51.763559  170748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:49:51.763605  170748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:49:51.778990  170748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43835
	I0229 01:49:51.779455  170748 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:49:51.780034  170748 main.go:141] libmachine: Using API Version  1
	I0229 01:49:51.780065  170748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:49:51.780409  170748 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:49:51.780631  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:49:51.782245  170748 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 01:49:51.783531  170748 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:49:51.784131  170748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:49:51.784194  170748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:49:51.801111  170748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0229 01:49:51.801567  170748 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:49:51.802106  170748 main.go:141] libmachine: Using API Version  1
	I0229 01:49:51.802133  170748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:49:51.802555  170748 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:49:51.802808  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:49:51.841125  170748 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:49:51.842449  170748 start.go:299] selected driver: kvm2
	I0229 01:49:51.842462  170748 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.59 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:49:51.842549  170748 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:49:51.843268  170748 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:49:51.843339  170748 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:49:51.858580  170748 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:49:51.858943  170748 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 01:49:51.859004  170748 cni.go:84] Creating CNI manager for ""
	I0229 01:49:51.859017  170748 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:49:51.859022  170748 start_flags.go:323] config:
	{Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.59 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:49:51.859208  170748 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:49:51.860985  170748 out.go:177] * Starting control plane node old-k8s-version-096771 in cluster old-k8s-version-096771
	I0229 01:49:51.862184  170748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:49:51.862221  170748 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 01:49:51.862232  170748 cache.go:56] Caching tarball of preloaded images
	I0229 01:49:51.862305  170748 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:49:51.862319  170748 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 01:49:51.862420  170748 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/config.json ...
	I0229 01:49:51.862626  170748 start.go:365] acquiring machines lock for old-k8s-version-096771: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:49:51.862680  170748 start.go:369] acquired machines lock for "old-k8s-version-096771" in 34.197µs
	I0229 01:49:51.862700  170748 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:49:51.862710  170748 fix.go:54] fixHost starting: 
	I0229 01:49:51.862992  170748 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:49:51.863027  170748 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:49:51.876971  170748 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0229 01:49:51.877507  170748 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:49:51.878125  170748 main.go:141] libmachine: Using API Version  1
	I0229 01:49:51.878151  170748 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:49:51.878504  170748 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:49:51.878688  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:49:51.878842  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetState
	I0229 01:49:51.880644  170748 fix.go:102] recreateIfNeeded on old-k8s-version-096771: state=Stopped err=<nil>
	I0229 01:49:51.880673  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	W0229 01:49:51.880860  170748 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:49:51.882785  170748 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-096771" ...
	I0229 01:49:51.884086  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .Start
	I0229 01:49:51.884269  170748 main.go:141] libmachine: (old-k8s-version-096771) Ensuring networks are active...
	I0229 01:49:51.885036  170748 main.go:141] libmachine: (old-k8s-version-096771) Ensuring network default is active
	I0229 01:49:51.885393  170748 main.go:141] libmachine: (old-k8s-version-096771) Ensuring network mk-old-k8s-version-096771 is active
	I0229 01:49:51.885873  170748 main.go:141] libmachine: (old-k8s-version-096771) Getting domain xml...
	I0229 01:49:51.886722  170748 main.go:141] libmachine: (old-k8s-version-096771) Creating domain...
	I0229 01:49:53.123120  170748 main.go:141] libmachine: (old-k8s-version-096771) Waiting to get IP...
	I0229 01:49:53.124065  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:53.124605  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:53.124629  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:53.124553  170783 retry.go:31] will retry after 301.575827ms: waiting for machine to come up
	I0229 01:49:53.428224  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:53.428818  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:53.428855  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:53.428743  170783 retry.go:31] will retry after 357.294873ms: waiting for machine to come up
	I0229 01:49:53.788164  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:53.788679  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:53.788703  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:53.788625  170783 retry.go:31] will retry after 480.187372ms: waiting for machine to come up
	I0229 01:49:54.270292  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:54.270839  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:54.270895  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:54.270817  170783 retry.go:31] will retry after 555.799809ms: waiting for machine to come up
	I0229 01:49:54.828593  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:54.829301  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:54.829338  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:54.829229  170783 retry.go:31] will retry after 592.867796ms: waiting for machine to come up
	I0229 01:49:55.424238  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:55.424726  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:55.424744  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:55.424696  170783 retry.go:31] will retry after 637.198864ms: waiting for machine to come up
	I0229 01:49:56.063559  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:56.064161  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:56.064188  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:56.064101  170783 retry.go:31] will retry after 949.932106ms: waiting for machine to come up
	I0229 01:49:57.016202  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:57.016783  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:57.016812  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:57.016740  170783 retry.go:31] will retry after 1.120477523s: waiting for machine to come up
	I0229 01:49:58.139111  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:58.139646  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:58.139680  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:58.139591  170783 retry.go:31] will retry after 1.539099593s: waiting for machine to come up
	I0229 01:49:59.681391  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:49:59.682058  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:49:59.682090  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:49:59.681995  170783 retry.go:31] will retry after 1.580642388s: waiting for machine to come up
	I0229 01:50:01.264152  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:01.264685  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:50:01.264715  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:50:01.264635  170783 retry.go:31] will retry after 1.852142s: waiting for machine to come up
	I0229 01:50:03.118152  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:03.118698  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:50:03.118729  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:50:03.118643  170783 retry.go:31] will retry after 3.403464415s: waiting for machine to come up
	I0229 01:50:06.526327  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:06.526963  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | unable to find current IP address of domain old-k8s-version-096771 in network mk-old-k8s-version-096771
	I0229 01:50:06.527004  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | I0229 01:50:06.526911  170783 retry.go:31] will retry after 3.653687764s: waiting for machine to come up
	I0229 01:50:10.183733  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.184399  170748 main.go:141] libmachine: (old-k8s-version-096771) Found IP for machine: 192.168.61.59
	I0229 01:50:10.184425  170748 main.go:141] libmachine: (old-k8s-version-096771) Reserving static IP address...
	I0229 01:50:10.184474  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has current primary IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.184813  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "old-k8s-version-096771", mac: "52:54:00:82:00:09", ip: "192.168.61.59"} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.184838  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | skip adding static IP to network mk-old-k8s-version-096771 - found existing host DHCP lease matching {name: "old-k8s-version-096771", mac: "52:54:00:82:00:09", ip: "192.168.61.59"}
	I0229 01:50:10.184853  170748 main.go:141] libmachine: (old-k8s-version-096771) Reserved static IP address: 192.168.61.59
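The retry lines above poll libvirt for the domain's DHCP lease with a roughly geometric backoff until an IP appears. A minimal standalone sketch of the same check, assuming the virsh CLI is available and reusing the network and MAC names shown in the log (the backoff schedule is illustrative, not minikube's exact one):

	#!/usr/bin/env bash
	# Poll libvirt's DHCP leases for the MAC until an IPv4 address shows up.
	# Network and MAC are copied from the log above.
	net="mk-old-k8s-version-096771"
	mac="52:54:00:82:00:09"
	delay=0.5
	for attempt in $(seq 1 20); do
	  # net-dhcp-leases columns: expiry date, expiry time, MAC, protocol, IP/prefix, ...
	  ip=$(virsh --connect qemu:///system net-dhcp-leases "$net" 2>/dev/null \
	        | awk -v m="$mac" '$3 == m {split($5, a, "/"); print a[1]}')
	  if [ -n "$ip" ]; then echo "found IP for machine: $ip"; exit 0; fi
	  echo "retry $attempt: will retry after ${delay}s: waiting for machine to come up"
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN {print d * 1.5}')  # roughly geometric backoff
	done
	echo "timed out waiting for a DHCP lease" >&2; exit 1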
	I0229 01:50:10.184866  170748 main.go:141] libmachine: (old-k8s-version-096771) Waiting for SSH to be available...
	I0229 01:50:10.184874  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | Getting to WaitForSSH function...
	I0229 01:50:10.187070  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.187396  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.187421  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.187553  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | Using SSH client type: external
	I0229 01:50:10.187584  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa (-rw-------)
	I0229 01:50:10.187618  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.59 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:50:10.187652  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | About to run SSH command:
	I0229 01:50:10.187665  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | exit 0
	I0229 01:50:10.313813  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | SSH cmd err, output: <nil>: 
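The WaitForSSH step above just runs `exit 0` over SSH until it succeeds; the external client options are listed verbatim in the log. The same probe can be reproduced from the host with a plain loop (key path and address taken from the log lines above):

	host="docker@192.168.61.59"
	key="/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa"
	# Retry a no-op command until sshd accepts the key; host-key checking is
	# disabled because the VM's host key changes on every recreate.
	until ssh -i "$key" -o IdentitiesOnly=yes -o ConnectTimeout=10 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      "$host" exit 0 2>/dev/null; do
	  echo "waiting for SSH..."; sleep 2
	done
	echo "SSH is available"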
	I0229 01:50:10.314218  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetConfigRaw
	I0229 01:50:10.314871  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:50:10.317799  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.318246  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.318276  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.318617  170748 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/config.json ...
	I0229 01:50:10.318879  170748 machine.go:88] provisioning docker machine ...
	I0229 01:50:10.318906  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:10.319117  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:50:10.319317  170748 buildroot.go:166] provisioning hostname "old-k8s-version-096771"
	I0229 01:50:10.319337  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:50:10.319504  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:10.322107  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.322500  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.322520  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.322743  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:10.322911  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.323086  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.323253  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:10.323428  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:10.323664  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:10.323679  170748 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-096771 && echo "old-k8s-version-096771" | sudo tee /etc/hostname
	I0229 01:50:10.448309  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-096771
	
	I0229 01:50:10.448346  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:10.451160  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.451462  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.451491  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.451670  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:10.451897  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.452032  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.452207  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:10.452441  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:10.452640  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:10.452663  170748 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-096771' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-096771/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-096771' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:50:10.575756  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:50:10.575809  170748 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:50:10.575847  170748 buildroot.go:174] setting up certificates
	I0229 01:50:10.575859  170748 provision.go:83] configureAuth start
	I0229 01:50:10.575880  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetMachineName
	I0229 01:50:10.576184  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:50:10.578956  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.579363  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.579390  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.579592  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:10.582214  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.582662  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.582685  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.582912  170748 provision.go:138] copyHostCerts
	I0229 01:50:10.582974  170748 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:50:10.582995  170748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:50:10.583086  170748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:50:10.583213  170748 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:50:10.583225  170748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:50:10.583274  170748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:50:10.583348  170748 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:50:10.583359  170748 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:50:10.583398  170748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:50:10.583464  170748 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-096771 san=[192.168.61.59 192.168.61.59 localhost 127.0.0.1 minikube old-k8s-version-096771]
	I0229 01:50:10.810756  170748 provision.go:172] copyRemoteCerts
	I0229 01:50:10.810825  170748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:50:10.810850  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:10.813728  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.814129  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.814158  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.814372  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:10.814601  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.814765  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:10.814915  170748 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:50:10.899863  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:50:10.925278  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 01:50:10.955074  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:50:10.981763  170748 provision.go:86] duration metric: configureAuth took 405.88786ms
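configureAuth above regenerates the server certificate (note the SAN list including 192.168.61.59, localhost and minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Once dockerd is later restarted with --tlsverify, the endpoint can be checked from the host; a sketch, assuming openssl is installed:

	# Confirm dockerd's TLS endpoint on 2376 presents a cert that chains to
	# the minikube CA (paths and address taken from the log).
	openssl s_client -brief </dev/null \
	  -connect 192.168.61.59:2376 \
	  -CAfile /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem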
	I0229 01:50:10.981817  170748 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:50:10.981981  170748 config.go:182] Loaded profile config "old-k8s-version-096771": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 01:50:10.982007  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:10.982309  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:10.985328  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.985755  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:10.985799  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:10.985993  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:10.986154  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.986365  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:10.986565  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:10.986787  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:10.986991  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:10.987003  170748 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:50:11.103414  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:50:11.103445  170748 buildroot.go:70] root file system type: tmpfs
	I0229 01:50:11.103614  170748 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:50:11.103651  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:11.106542  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:11.106905  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:11.106929  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:11.107125  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:11.107317  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:11.107484  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:11.107609  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:11.107737  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:11.107915  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:11.108013  170748 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:50:11.239778  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:50:11.239809  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:11.242504  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:11.242917  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:11.242950  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:11.243112  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:11.243330  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:11.243504  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:11.243649  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:11.243794  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:11.243998  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:11.244017  170748 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:50:12.050152  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 01:50:12.050183  170748 machine.go:91] provisioned docker machine in 1.73128611s
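The diff-or-replace one-liner above works because diff exits non-zero both when the files differ and when docker.service does not exist yet (the "can't stat" output above is the first-boot case), so the replace-and-restart branch runs exactly when the unit is new or changed. The pattern in isolation:

	unit=/lib/systemd/system/docker.service
	# Only swap the rendered unit in and restart docker when it actually
	# changed (or does not exist yet); otherwise leave the service alone.
	sudo diff -u "$unit" "$unit.new" || {
	  sudo mv "$unit.new" "$unit"
	  sudo systemctl -f daemon-reload &&
	  sudo systemctl -f enable docker &&
	  sudo systemctl -f restart docker
	}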
	I0229 01:50:12.050198  170748 start.go:300] post-start starting for "old-k8s-version-096771" (driver="kvm2")
	I0229 01:50:12.050210  170748 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:50:12.050225  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:12.050605  170748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:50:12.050649  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:12.053394  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.053707  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:12.053746  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.053938  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:12.054140  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:12.054324  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:12.054520  170748 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:50:12.141086  170748 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:50:12.145731  170748 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:50:12.145761  170748 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:50:12.145845  170748 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:50:12.145933  170748 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:50:12.146047  170748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:50:12.157212  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:50:12.182628  170748 start.go:303] post-start completed in 132.415207ms
	I0229 01:50:12.182673  170748 fix.go:56] fixHost completed within 20.319962391s
	I0229 01:50:12.182700  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:12.186055  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.186414  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:12.186434  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.186672  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:12.186889  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:12.187078  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:12.187251  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:12.187445  170748 main.go:141] libmachine: Using SSH client type: native
	I0229 01:50:12.187597  170748 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.59 22 <nil> <nil>}
	I0229 01:50:12.187608  170748 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 01:50:12.298839  170748 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171412.278591263
	
	I0229 01:50:12.298864  170748 fix.go:206] guest clock: 1709171412.278591263
	I0229 01:50:12.298875  170748 fix.go:219] Guest: 2024-02-29 01:50:12.278591263 +0000 UTC Remote: 2024-02-29 01:50:12.182678785 +0000 UTC m=+20.482427923 (delta=95.912478ms)
	I0229 01:50:12.298901  170748 fix.go:190] guest clock delta is within tolerance: 95.912478ms
	I0229 01:50:12.298913  170748 start.go:83] releasing machines lock for "old-k8s-version-096771", held for 20.436220718s
	I0229 01:50:12.298939  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:12.299206  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:50:12.301832  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.302177  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:12.302207  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.302338  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:12.302853  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:12.303037  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .DriverName
	I0229 01:50:12.303128  170748 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:50:12.303185  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:12.303238  170748 ssh_runner.go:195] Run: cat /version.json
	I0229 01:50:12.303257  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHHostname
	I0229 01:50:12.306014  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.306191  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.306366  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:12.306392  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.306543  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:12.306673  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:12.306700  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:12.306742  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:12.306824  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHPort
	I0229 01:50:12.306922  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:12.306998  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHKeyPath
	I0229 01:50:12.307061  170748 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:50:12.307113  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetSSHUsername
	I0229 01:50:12.307212  170748 sshutil.go:53] new ssh client: &{IP:192.168.61.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/old-k8s-version-096771/id_rsa Username:docker}
	I0229 01:50:12.411201  170748 ssh_runner.go:195] Run: systemctl --version
	I0229 01:50:12.419153  170748 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:50:12.426273  170748 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:50:12.426354  170748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 01:50:12.436640  170748 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 01:50:12.454587  170748 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
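The find/sed pair above rewrites any bridge and podman CNI configs on the guest to the cluster's pod CIDR. A quick check of the result, run on the guest (file name from the log line above, expected values from the sed expressions):

	# After the rewrite, the podman bridge config should carry the cluster's
	# pod CIDR and gateway rather than its defaults.
	grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
	# expected: "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"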
	I0229 01:50:12.454635  170748 start.go:475] detecting cgroup driver to use...
	I0229 01:50:12.454786  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:50:12.475090  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 01:50:12.487661  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:50:12.499567  170748 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:50:12.499655  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:50:12.510958  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:50:12.522861  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:50:12.533791  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:50:12.544326  170748 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:50:12.557012  170748 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:50:12.569283  170748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:50:12.580613  170748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:50:12.591591  170748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:50:12.727242  170748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:50:12.753502  170748 start.go:475] detecting cgroup driver to use...
	I0229 01:50:12.753617  170748 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:50:12.772307  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:50:12.787823  170748 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:50:12.806736  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:50:12.821344  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:50:12.835637  170748 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:50:12.859916  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:50:12.874537  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:50:12.895366  170748 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:50:12.899391  170748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:50:12.909444  170748 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:50:12.927495  170748 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:50:13.046255  170748 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:50:13.187097  170748 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:50:13.187251  170748 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:50:13.209111  170748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:50:13.328044  170748 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:50:14.705199  170748 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.37711489s)
	I0229 01:50:14.705306  170748 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:50:14.731194  170748 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:50:14.757491  170748 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 01:50:14.757533  170748 main.go:141] libmachine: (old-k8s-version-096771) Calling .GetIP
	I0229 01:50:14.760205  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:14.760538  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:00:09", ip: ""} in network mk-old-k8s-version-096771: {Iface:virbr3 ExpiryTime:2024-02-29 02:50:03 +0000 UTC Type:0 Mac:52:54:00:82:00:09 Iaid: IPaddr:192.168.61.59 Prefix:24 Hostname:old-k8s-version-096771 Clientid:01:52:54:00:82:00:09}
	I0229 01:50:14.760560  170748 main.go:141] libmachine: (old-k8s-version-096771) DBG | domain old-k8s-version-096771 has defined IP address 192.168.61.59 and MAC address 52:54:00:82:00:09 in network mk-old-k8s-version-096771
	I0229 01:50:14.760733  170748 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 01:50:14.764827  170748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
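The /etc/hosts update above filters out any stale host.minikube.internal entry, appends the fresh one to a temp file, and copies the result back with cp rather than mv; copying in place preserves the file's inode, presumably so the same code stays safe on drivers where /etc/hosts is a bind mount (with kvm2 it is a plain file). The same pattern in isolation:

	entry=$'192.168.61.1\thost.minikube.internal'
	# Rebuild /etc/hosts without the old entry, append the new one, then copy
	# the result back in place (cp keeps the inode; mv would replace it).
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts && rm "/tmp/h.$$"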
	I0229 01:50:14.779402  170748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 01:50:14.779469  170748 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:50:14.800922  170748 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:50:14.800943  170748 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:50:14.800985  170748 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:50:14.812001  170748 ssh_runner.go:195] Run: which lz4
	I0229 01:50:14.816148  170748 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 01:50:14.820555  170748 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 01:50:14.820589  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 01:50:16.268610  170748 docker.go:649] Took 1.452483 seconds to copy over tarball
	I0229 01:50:16.268691  170748 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 01:50:18.439171  170748 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.170445244s)
	I0229 01:50:18.439206  170748 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 01:50:18.476371  170748 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 01:50:18.490029  170748 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 01:50:18.512584  170748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:50:18.657106  170748 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:50:22.262063  170748 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.60491365s)
	I0229 01:50:22.262173  170748 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:50:22.282710  170748 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 01:50:22.282736  170748 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 01:50:22.282748  170748 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 01:50:22.284474  170748 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:50:22.284530  170748 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 01:50:22.284590  170748 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:50:22.284601  170748 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 01:50:22.284605  170748 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:50:22.284530  170748 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:50:22.284531  170748 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:50:22.284824  170748 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:50:22.285743  170748 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:50:22.285760  170748 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 01:50:22.285799  170748 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:50:22.285810  170748 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:50:22.285817  170748 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:50:22.285743  170748 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 01:50:22.285742  170748 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:50:22.285742  170748 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:50:22.414594  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 01:50:22.425280  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:50:22.425995  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 01:50:22.426909  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:50:22.431648  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:50:22.433466  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 01:50:22.434357  170748 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 01:50:22.434404  170748 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 01:50:22.434443  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 01:50:22.442317  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:50:22.508363  170748 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 01:50:22.508402  170748 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 01:50:22.508417  170748 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 01:50:22.508416  170748 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 01:50:22.508437  170748 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:50:22.508437  170748 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:50:22.508469  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 01:50:22.508477  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 01:50:22.508479  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 01:50:22.520210  170748 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 01:50:22.520256  170748 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 01:50:22.520278  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 01:50:22.520278  170748 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 01:50:22.520337  170748 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 01:50:22.520375  170748 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:50:22.520419  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 01:50:22.520339  170748 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:50:22.520489  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 01:50:22.520301  170748 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 01:50:22.577622  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 01:50:22.577691  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 01:50:22.577751  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 01:50:22.589392  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 01:50:22.589422  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 01:50:22.589457  170748 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 01:50:22.881989  170748 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:50:22.901757  170748 cache_images.go:92] LoadImages completed in 618.992888ms
	W0229 01:50:22.901872  170748 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0229 01:50:22.901965  170748 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:50:22.927874  170748 cni.go:84] Creating CNI manager for ""
	I0229 01:50:22.927913  170748 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 01:50:22.927947  170748 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 01:50:22.927976  170748 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.59 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-096771 NodeName:old-k8s-version-096771 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 01:50:22.928143  170748 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-096771"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.59
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.59"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-096771
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.59:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
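The YAML block above is the kubeadm v1beta1 config minikube renders for Kubernetes v1.16.0; it is written to /var/tmp/minikube/kubeadm.yaml a few lines below. A quick sanity check of the rendered file, assuming the pinned kubeadm binary from the log is in place:

    # Confirm the pinned version and pod CIDR survived the render.
    grep -E 'kubernetesVersion|podSubnet' /var/tmp/minikube/kubeadm.yaml
    # The matching kubeadm binary should report the same version.
    sudo /var/lib/minikube/binaries/v1.16.0/kubeadm version -o short
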
	I0229 01:50:22.928210  170748 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-096771 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
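
The unit drop-in above uses the standard systemd reset idiom: an empty ExecStart= clears the ExecStart inherited from kubelet.service, and the second ExecStart= installs the full minikube command line. To inspect the merged result on the node (plain systemctl, nothing minikube-specific):

    # Show the merged kubelet unit, including the drop-in's ExecStart override.
    systemctl cat kubelet
    # Print just the effective ExecStart after the empty-then-set reset.
    systemctl show kubelet -p ExecStart
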
	I0229 01:50:22.928260  170748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 01:50:22.938986  170748 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:50:22.939058  170748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:50:22.950276  170748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 01:50:22.967753  170748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 01:50:22.984290  170748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 01:50:23.004218  170748 ssh_runner.go:195] Run: grep 192.168.61.59	control-plane.minikube.internal$ /etc/hosts
	I0229 01:50:23.008311  170748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
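
The bash one-liner above updates /etc/hosts idempotently: it filters out any prior control-plane.minikube.internal entry, appends the current mapping, and sudo-copies the temp file back into place. The same pattern as a standalone sketch, with the IP and hostname taken from the log:

    # Idempotently pin control-plane.minikube.internal to the node IP.
    IP=192.168.61.59; NAME=control-plane.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts
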
	I0229 01:50:23.021430  170748 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771 for IP: 192.168.61.59
	I0229 01:50:23.021456  170748 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:50:23.021635  170748 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:50:23.021677  170748 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:50:23.021756  170748 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/client.key
	I0229 01:50:23.021840  170748 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key.a8f3ad05
	I0229 01:50:23.021877  170748 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key
	I0229 01:50:23.021986  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:50:23.022013  170748 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:50:23.022022  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:50:23.022048  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:50:23.022071  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:50:23.022100  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:50:23.022135  170748 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:50:23.022882  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:50:23.049012  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:50:23.074887  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:50:23.100822  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/old-k8s-version-096771/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 01:50:23.126716  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:50:23.153082  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:50:23.177431  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:50:23.204352  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:50:23.230722  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:50:23.256686  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:50:23.282800  170748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:50:23.307212  170748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:50:23.325290  170748 ssh_runner.go:195] Run: openssl version
	I0229 01:50:23.331265  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:50:23.343376  170748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.348133  170748 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.348190  170748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:50:23.354103  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:50:23.366806  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:50:23.379906  170748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:50:23.385017  170748 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:50:23.385078  170748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:50:23.391011  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:50:23.402272  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:50:23.413428  170748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:50:23.418179  170748 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:50:23.418236  170748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:50:23.424722  170748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
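
The openssl x509 -hash calls above compute the subject hash that OpenSSL's certificate-directory lookup expects, and each CA is then linked as /etc/ssl/certs/<hash>.0 (b5213941 is the hash of the minikubeCA subject seen here). The same convention by hand:

    # Link a CA into the OpenSSL hashed-directory layout so verification finds it.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
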
	I0229 01:50:23.436141  170748 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:50:23.440684  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:50:23.446668  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:50:23.452732  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:50:23.458422  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:50:23.464700  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:50:23.471164  170748 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
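
Each -checkend 86400 invocation exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs must be regenerated. Standalone form of the same test:

    # Fail if the apiserver cert expires within the next 24h.
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"
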
	I0229 01:50:23.477326  170748 kubeadm.go:404] StartCluster: {Name:old-k8s-version-096771 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-096771 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.59 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:50:23.477484  170748 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:50:23.494945  170748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:50:23.506615  170748 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:50:23.506639  170748 kubeadm.go:636] restartCluster start
	I0229 01:50:23.506706  170748 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:50:23.518001  170748 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:23.519098  170748 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-096771" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:50:23.519901  170748 kubeconfig.go:146] "old-k8s-version-096771" context is missing from /home/jenkins/minikube-integration/18063-115328/kubeconfig - will repair!
	I0229 01:50:23.520702  170748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
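
The kubeconfig repair above re-adds the old-k8s-version-096771 context that the verify step found missing. Assuming kubectl is available on the host, the result can be confirmed with the standard config subcommand:

    # List contexts in the integration kubeconfig; the repaired profile should appear.
    kubectl config get-contexts \
      --kubeconfig=/home/jenkins/minikube-integration/18063-115328/kubeconfig
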
	I0229 01:50:23.522330  170748 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:50:23.533246  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:23.533295  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:23.547295  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:24.033750  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:24.033860  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:24.048564  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:24.533769  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:24.533891  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:24.548210  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:25.033378  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:25.033481  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:25.047506  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:25.534217  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:25.534335  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:25.550890  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:26.033425  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:26.033520  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:26.047977  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:26.533543  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:26.533640  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:26.548562  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:27.033650  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:27.033745  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:27.047793  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:27.533320  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:27.533433  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:27.547542  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:28.034161  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:28.034267  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:28.048631  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:28.533849  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:28.533936  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:28.548657  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:29.034171  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:29.034265  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:29.049368  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:29.533966  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:29.534076  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:29.547968  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:30.033572  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:30.033674  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:30.047466  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:30.534086  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:30.534182  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:30.548476  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:31.034066  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:31.034153  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:31.048346  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:31.533730  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:31.533841  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:31.549151  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:32.033566  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:32.033679  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:32.047856  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:32.533836  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:32.533925  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:32.547887  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:33.033379  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:33.033498  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:33.047768  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:33.534214  170748 api_server.go:166] Checking apiserver status ...
	I0229 01:50:33.534311  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:50:33.547784  170748 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:50:33.547822  170748 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 01:50:33.547914  170748 kubeadm.go:1135] stopping kube-system containers ...
	I0229 01:50:33.547989  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:50:33.567667  170748 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 01:50:33.588594  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:50:33.599725  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
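
ls exiting with status 2 on all four files tells the restart path there is no stale kubeadm configuration to clean, so it proceeds straight to re-running the init phases. The check by hand (status 2 means the files are absent):

    # Status 2 here means no stale config to clean before kubeadm runs again.
    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
    echo "exit=$?"
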
	I0229 01:50:33.599798  170748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:50:33.610830  170748 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 01:50:33.610870  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:50:33.733711  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:50:34.539114  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:50:34.796840  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:50:34.891430  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
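
Rather than a full kubeadm init, the restart path replays individual phases against the same config; the order above (certs, kubeconfig, kubelet-start, control-plane, etcd) mirrors what a plain init would do. A condensed sketch of that sequence, with the paths and version taken from the log:

    # Replay the kubeadm init phases run during a cluster restart.
    export PATH=/var/lib/minikube/binaries/v1.16.0:$PATH
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # Word-splitting of $phase is intentional: "certs all" becomes two arguments.
      sudo env PATH="$PATH" kubeadm init phase $phase --config "$CFG"
    done
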
	I0229 01:50:34.964998  170748 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:50:34.965100  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:35.466194  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:35.965393  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:36.465422  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:36.965974  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:37.465174  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:37.965848  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:38.465674  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:38.965576  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:39.465219  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:39.965435  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:40.465946  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:40.966083  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:41.466124  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:41.966024  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:42.465743  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:42.965896  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:43.465835  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:43.965939  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:44.466072  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:44.965165  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:45.466017  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:45.965264  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:46.465265  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:46.965970  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:47.465408  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:47.965664  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:48.465920  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:48.966034  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:49.465387  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:49.965207  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:50.465349  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:50.965835  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:51.466028  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:51.965697  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:52.465250  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:52.965342  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:53.466191  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:53.965970  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:54.465244  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:54.965912  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:55.465211  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:55.965732  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:56.465252  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:56.965992  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:57.466029  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:57.965921  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:58.466025  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:58.966040  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:59.465561  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:50:59.965330  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:00.465921  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:00.965254  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:01.465271  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:01.965887  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:02.465255  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:02.966024  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:03.465865  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:03.965960  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:04.465548  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:04.966221  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:05.465441  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:05.965265  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:06.465603  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:06.966149  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:07.465153  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:07.965207  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:08.465229  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:08.965912  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:09.465543  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:09.965180  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:10.466242  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:10.965288  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:11.466150  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:11.965393  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:12.466228  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:12.965796  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:13.465820  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:13.965330  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:14.465273  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:14.965446  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:15.465402  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:15.965921  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:16.466070  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:16.965563  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:17.465267  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:17.965245  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:18.465966  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:18.965338  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:19.465528  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:19.965211  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:20.465246  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:20.965445  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:21.465269  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:21.965147  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:22.465889  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:22.965822  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:23.466164  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:23.965977  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:24.465644  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:24.965725  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:25.465229  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:25.966014  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:26.465923  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:26.965484  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:27.466090  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:27.965402  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:28.466240  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:28.965324  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:29.465334  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:29.965241  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:30.465451  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:30.965254  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:31.466030  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:31.965884  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:32.465850  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:32.965614  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:33.465942  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:33.965546  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:34.465920  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
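
The half-second cadence above is the wait for a kube-apiserver process to appear; pgrep -xnf matches against the whole command line, so a hit means the static pod's container is actually running. A bounded version of the same wait (the 60-second budget is illustrative):

    # Give the apiserver up to 60 seconds to appear, polling twice a second.
    for _ in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done
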
	I0229 01:51:34.965492  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:34.985049  170748 logs.go:276] 0 containers: []
	W0229 01:51:34.985079  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:34.985144  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:35.007651  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.007682  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:35.007740  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:35.034642  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.034674  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:35.034735  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:35.053856  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.053886  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:35.053943  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:35.073702  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.073731  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:35.073799  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:35.095080  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.095117  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:35.095177  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:35.113157  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.113187  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:35.113240  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:35.130361  170748 logs.go:276] 0 containers: []
	W0229 01:51:35.130385  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:35.130395  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:35.130411  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:35.192820  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:35.192852  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:35.241892  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:35.241925  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:35.257319  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:35.257357  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:35.328720  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:35.328747  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:35.328765  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
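
With the wait exhausted, the run falls back to gathering diagnostics: per-component docker ps filters, container status via crictl or docker, kubelet and Docker journals, dmesg, and a kubectl describe nodes that fails here because nothing is listening on localhost:8443. The same bundle can be collected by hand (the output file names are hypothetical):

    # Minimal diagnostics bundle mirroring the gathering steps above.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u docker -u cri-docker -n 400 > docker.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
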
	I0229 01:51:37.870792  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:37.885272  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:37.902782  170748 logs.go:276] 0 containers: []
	W0229 01:51:37.902815  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:37.902882  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:37.923509  170748 logs.go:276] 0 containers: []
	W0229 01:51:37.923545  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:37.923611  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:37.942614  170748 logs.go:276] 0 containers: []
	W0229 01:51:37.942648  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:37.942717  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:37.961011  170748 logs.go:276] 0 containers: []
	W0229 01:51:37.961045  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:37.961099  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:37.979241  170748 logs.go:276] 0 containers: []
	W0229 01:51:37.979272  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:37.979330  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:38.003419  170748 logs.go:276] 0 containers: []
	W0229 01:51:38.003448  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:38.003506  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:38.027800  170748 logs.go:276] 0 containers: []
	W0229 01:51:38.027828  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:38.027892  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:38.049572  170748 logs.go:276] 0 containers: []
	W0229 01:51:38.049612  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:38.049626  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:38.049641  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:38.127058  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:38.127096  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:38.187258  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:38.187295  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:38.203937  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:38.203968  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:38.282960  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:38.282987  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:38.283004  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:40.844461  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:40.857672  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:40.876050  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.876088  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:40.876142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:40.894808  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.894837  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:40.894899  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:40.910916  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.910940  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:40.910997  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:40.928125  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.928152  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:40.928205  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:40.949046  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.949074  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:40.949131  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:40.969812  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.969845  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:40.969913  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:40.998796  170748 logs.go:276] 0 containers: []
	W0229 01:51:40.998869  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:40.998946  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:41.014597  170748 logs.go:276] 0 containers: []
	W0229 01:51:41.014635  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:41.014653  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:41.014666  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:41.081992  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:41.082035  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:41.101457  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:41.101486  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:41.179694  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:41.179736  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:41.179752  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:41.237601  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:41.237642  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:43.803060  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:43.820456  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:43.843053  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.843084  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:43.843150  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:43.862412  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.862443  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:43.862499  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:43.883019  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.883053  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:43.883113  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:43.903961  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.903992  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:43.904050  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:43.926540  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.926571  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:43.926634  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:43.948501  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.948533  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:43.948602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:43.969161  170748 logs.go:276] 0 containers: []
	W0229 01:51:43.969190  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:43.969252  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:44.002322  170748 logs.go:276] 0 containers: []
	W0229 01:51:44.002354  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:44.002368  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:44.002385  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:44.024152  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:44.024191  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:44.154034  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:44.154125  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:44.154155  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:44.213102  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:44.213139  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:44.292548  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:44.292585  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:46.868320  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:46.887248  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:46.910496  170748 logs.go:276] 0 containers: []
	W0229 01:51:46.910527  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:46.910583  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:46.933697  170748 logs.go:276] 0 containers: []
	W0229 01:51:46.933725  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:46.933803  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:46.956641  170748 logs.go:276] 0 containers: []
	W0229 01:51:46.956672  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:46.956730  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:46.980162  170748 logs.go:276] 0 containers: []
	W0229 01:51:46.980194  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:46.980255  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:47.002204  170748 logs.go:276] 0 containers: []
	W0229 01:51:47.002237  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:47.002299  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:47.025722  170748 logs.go:276] 0 containers: []
	W0229 01:51:47.025756  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:47.025846  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:47.051123  170748 logs.go:276] 0 containers: []
	W0229 01:51:47.051156  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:47.051217  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:47.072229  170748 logs.go:276] 0 containers: []
	W0229 01:51:47.072257  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:47.072268  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:47.072279  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:47.138260  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:47.138290  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:47.197353  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:47.197391  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:47.219253  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:47.219294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:47.338950  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:47.338988  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:47.339006  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:49.895380  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:49.910848  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:49.929527  170748 logs.go:276] 0 containers: []
	W0229 01:51:49.929554  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:49.929602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:49.949518  170748 logs.go:276] 0 containers: []
	W0229 01:51:49.949548  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:49.949615  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:49.969194  170748 logs.go:276] 0 containers: []
	W0229 01:51:49.969226  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:49.969314  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:49.992508  170748 logs.go:276] 0 containers: []
	W0229 01:51:49.992532  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:49.992591  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:50.012238  170748 logs.go:276] 0 containers: []
	W0229 01:51:50.012340  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:50.012418  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:50.031920  170748 logs.go:276] 0 containers: []
	W0229 01:51:50.031949  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:50.032009  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:50.052823  170748 logs.go:276] 0 containers: []
	W0229 01:51:50.052853  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:50.052917  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:50.072208  170748 logs.go:276] 0 containers: []
	W0229 01:51:50.072239  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:50.072253  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:50.072269  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:50.124612  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:50.124648  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:50.142613  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:50.142645  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:50.227040  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:50.227065  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:50.227083  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:50.293808  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:50.293858  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:52.882560  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:52.898003  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:52.920711  170748 logs.go:276] 0 containers: []
	W0229 01:51:52.920754  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:52.920812  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:52.943161  170748 logs.go:276] 0 containers: []
	W0229 01:51:52.943188  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:52.943247  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:52.961549  170748 logs.go:276] 0 containers: []
	W0229 01:51:52.961577  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:52.961632  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:52.979415  170748 logs.go:276] 0 containers: []
	W0229 01:51:52.979447  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:52.979515  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:52.997382  170748 logs.go:276] 0 containers: []
	W0229 01:51:52.997405  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:52.997465  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:53.018397  170748 logs.go:276] 0 containers: []
	W0229 01:51:53.018427  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:53.018486  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:53.042700  170748 logs.go:276] 0 containers: []
	W0229 01:51:53.042729  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:53.042792  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:53.062065  170748 logs.go:276] 0 containers: []
	W0229 01:51:53.062094  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:53.062108  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:53.062123  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:53.112749  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:53.112785  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:53.127066  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:53.127092  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:53.192452  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:53.192476  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:53.192490  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:53.255956  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:53.255994  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:55.831066  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:55.845598  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:55.864446  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.864473  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:55.864532  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:55.884086  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.884114  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:55.884180  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:55.901829  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.901855  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:55.901907  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:55.920288  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.920342  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:55.920397  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:55.941354  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.941381  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:55.941442  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:55.962516  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.962542  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:55.962599  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:55.983903  170748 logs.go:276] 0 containers: []
	W0229 01:51:55.983931  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:55.983980  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:56.001088  170748 logs.go:276] 0 containers: []
	W0229 01:51:56.001119  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:56.001133  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:56.001154  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:56.015686  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:56.015714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:56.090625  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:51:56.090650  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:56.090667  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:56.139233  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:56.139272  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:56.207486  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:56.207519  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:58.780604  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:51:58.793905  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:51:58.813291  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.813323  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:51:58.813403  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:51:58.830083  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.830111  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:51:58.830168  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:51:58.845751  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.845774  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:51:58.845845  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:51:58.862591  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.862615  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:51:58.862664  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:51:58.880068  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.880095  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:51:58.880148  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:51:58.896474  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.896496  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:51:58.896538  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:51:58.913019  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.913051  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:51:58.913122  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:51:58.931267  170748 logs.go:276] 0 containers: []
	W0229 01:51:58.931292  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:51:58.931306  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:51:58.931321  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:51:58.977284  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:51:58.977316  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:51:59.035956  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:51:59.035995  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:51:59.088260  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:51:59.088294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:51:59.102829  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:51:59.102858  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:51:59.174088  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:01.674561  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:01.694810  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:01.716713  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.716801  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:01.716866  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:01.738076  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.738106  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:01.738188  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:01.762614  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.762645  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:01.762708  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:01.780979  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.781007  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:01.781072  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:01.802738  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.802805  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:01.802871  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:01.826522  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.826554  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:01.826624  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:01.850369  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.850403  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:01.850469  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:01.874796  170748 logs.go:276] 0 containers: []
	W0229 01:52:01.874824  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:01.874839  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:01.874853  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:01.926006  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:01.926042  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:01.991350  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:01.991379  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:02.050363  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:02.050413  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:02.068081  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:02.068117  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:02.149317  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:04.650196  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:04.665252  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:04.682858  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.682889  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:04.682949  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:04.702394  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.702423  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:04.702483  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:04.720298  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.720320  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:04.720369  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:04.737563  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.737586  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:04.737631  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:04.754987  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.755015  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:04.755079  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:04.774374  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.774406  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:04.774465  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:04.792125  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.792156  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:04.792207  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:04.811577  170748 logs.go:276] 0 containers: []
	W0229 01:52:04.811622  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:04.811639  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:04.811656  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:04.863372  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:04.863413  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:04.878503  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:04.878538  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:04.949928  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:04.949953  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:04.949968  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:04.996230  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:04.996260  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:07.564767  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:07.580319  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:07.600558  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.600587  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:07.600649  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:07.621608  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.621639  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:07.621698  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:07.641969  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.642002  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:07.642069  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:07.662057  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.662084  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:07.662149  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:07.683880  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.683912  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:07.683972  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:07.706467  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.706491  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:07.706540  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:07.728518  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.728545  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:07.728594  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:07.751769  170748 logs.go:276] 0 containers: []
	W0229 01:52:07.751796  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:07.751808  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:07.751822  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:07.813951  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:07.813985  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:07.829599  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:07.829668  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:07.896707  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:07.896736  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:07.896749  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:07.945289  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:07.945319  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:10.521192  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:10.540001  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:10.586998  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.587029  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:10.587095  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:10.612805  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.612834  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:10.612900  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:10.632986  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.633017  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:10.633070  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:10.651530  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.651562  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:10.651629  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:10.671302  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.671331  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:10.671390  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:10.692240  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.692266  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:10.692350  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:10.715488  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.715516  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:10.715567  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:10.736678  170748 logs.go:276] 0 containers: []
	W0229 01:52:10.736707  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:10.736720  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:10.736737  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:10.807303  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:10.807325  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:10.858062  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:10.858108  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:10.877749  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:10.877804  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:10.946950  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:10.946974  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:10.946988  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:13.494867  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:13.512321  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:13.536400  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.536435  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:13.536509  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:13.568837  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.568889  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:13.568953  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:13.596828  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.596862  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:13.596926  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:13.618401  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.618432  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:13.618487  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:13.643359  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.643389  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:13.643445  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:13.664599  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.664631  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:13.664692  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:13.684250  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.684283  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:13.684337  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:13.703636  170748 logs.go:276] 0 containers: []
	W0229 01:52:13.703675  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:13.703688  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:13.703702  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:13.753416  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:13.753456  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:13.834367  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:13.834402  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:13.905691  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:13.905735  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:13.924378  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:13.924423  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:13.997483  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:16.498308  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:16.516416  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:16.540114  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.540147  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:16.540224  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:16.564704  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.564738  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:16.564801  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:16.596395  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.596430  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:16.596528  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:16.631590  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.631627  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:16.631693  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:16.651423  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.651459  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:16.651521  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:16.674761  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.674793  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:16.674855  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:16.694224  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.694252  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:16.694314  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:16.715672  170748 logs.go:276] 0 containers: []
	W0229 01:52:16.715701  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:16.715713  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:16.715728  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:16.798325  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:16.798350  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:16.798366  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:16.853595  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:16.853649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:16.929988  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:16.930028  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:16.990201  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:16.990239  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:19.508175  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:19.526082  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:19.547449  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.547474  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:19.547531  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:19.575610  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.575640  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:19.575689  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:19.602062  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.602091  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:19.602155  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:19.619541  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.619567  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:19.619626  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:19.638404  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.638437  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:19.638496  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:19.655640  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.655674  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:19.655753  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:19.673851  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.673880  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:19.673938  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:19.692365  170748 logs.go:276] 0 containers: []
	W0229 01:52:19.692390  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:19.692401  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:19.692410  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:19.733873  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:19.733906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:19.795578  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:19.795606  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:19.847022  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:19.847061  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:19.864803  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:19.864834  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:19.944284  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:22.444573  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:22.467142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:22.508494  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.508528  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:22.508587  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:22.546006  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.546039  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:22.546102  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:22.571383  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.571470  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:22.571545  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:22.604153  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.604255  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:22.604354  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:22.630705  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.630749  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:22.630811  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:22.655079  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.655105  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:22.655170  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:22.678045  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.678077  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:22.678137  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:22.700192  170748 logs.go:276] 0 containers: []
	W0229 01:52:22.700245  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:22.700259  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:22.700280  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:22.767947  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:22.767984  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:22.786475  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:22.786515  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:22.877181  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:22.877218  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:22.877235  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:22.938852  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:22.938901  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:25.513019  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:25.533843  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:25.573115  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.573161  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:25.573255  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:25.607043  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.607074  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:25.607133  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:25.627659  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.627749  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:25.627831  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:25.648576  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.648609  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:25.648669  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:25.668455  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.668489  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:25.668542  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:25.691464  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.691487  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:25.691543  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:25.712573  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.712602  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:25.712689  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:25.736470  170748 logs.go:276] 0 containers: []
	W0229 01:52:25.736503  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:25.736518  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:25.736564  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:25.815715  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:25.815767  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:25.835403  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:25.835438  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:25.916237  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:25.916258  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:25.916271  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:25.970291  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:25.970325  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:28.530564  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:28.547699  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:28.580796  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.580828  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:28.580890  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:28.604128  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.604160  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:28.604223  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:28.626642  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.626674  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:28.626724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:28.648931  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.648968  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:28.649032  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:28.668950  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.668978  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:28.669042  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:28.692085  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.692113  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:28.692181  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:28.716085  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.716108  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:28.716181  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:28.738696  170748 logs.go:276] 0 containers: []
	W0229 01:52:28.738721  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:28.738735  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:28.738756  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:28.790781  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:28.790822  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:28.805525  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:28.805569  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:28.871608  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:28.871633  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:28.871649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:28.922282  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:28.922326  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:31.488576  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:31.506595  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:31.529639  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.529674  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:31.529759  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:31.552271  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.552300  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:31.552374  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:31.576531  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.576560  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:31.576620  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:31.601687  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.601718  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:31.601799  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:31.637052  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.637087  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:31.637151  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:31.660324  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.660357  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:31.660424  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:31.680267  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.680295  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:31.680346  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:31.700747  170748 logs.go:276] 0 containers: []
	W0229 01:52:31.700775  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:31.700786  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:31.700798  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:31.768229  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:31.768279  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:31.784150  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:31.784188  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:31.853680  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:31.853703  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:31.853724  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:31.905934  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:31.905972  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:34.474290  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:34.490191  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:34.509380  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.509411  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:34.509470  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:34.526036  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.526066  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:34.526127  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:34.547095  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.547124  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:34.547179  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:34.569932  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.569959  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:34.570021  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:34.596717  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.596750  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:34.596806  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:34.616328  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.616351  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:34.616400  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:34.633624  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.633652  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:34.633711  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:34.650683  170748 logs.go:276] 0 containers: []
	W0229 01:52:34.650708  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:34.650719  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:34.650730  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:34.665531  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:34.665563  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:34.737508  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:34.737531  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:34.737547  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:34.780072  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:34.780106  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:34.838719  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:34.838747  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:37.395822  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:37.410480  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:37.431521  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.431545  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:37.431602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:37.451385  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.451412  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:37.451467  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:37.469394  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.469422  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:37.469481  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:37.502282  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.502309  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:37.502375  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:37.526124  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.526156  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:37.526225  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:37.551310  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.551332  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:37.551389  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:37.573127  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.573152  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:37.573208  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:37.598449  170748 logs.go:276] 0 containers: []
	W0229 01:52:37.598481  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:37.598495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:37.598511  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:37.622051  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:37.622080  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:37.700451  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:37.700483  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:37.700500  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:37.744763  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:37.744796  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:37.805712  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:37.805743  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:40.357573  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:40.373292  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:40.392025  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.392059  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:40.392119  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:40.410086  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.410122  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:40.410183  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:40.429758  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.429800  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:40.429865  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:40.448405  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.448432  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:40.448504  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:40.466219  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.466242  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:40.466293  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:40.492577  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.492602  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:40.492662  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:40.530380  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.530412  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:40.530473  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:40.562377  170748 logs.go:276] 0 containers: []
	W0229 01:52:40.562408  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:40.562423  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:40.562440  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:40.628537  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:40.628567  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:40.645221  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:40.645246  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:40.727339  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:40.727368  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:40.727384  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:40.771461  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:40.771488  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:43.337139  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:43.351145  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:43.369565  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.369589  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:43.369639  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:43.388120  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.388144  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:43.388217  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:43.407980  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.408006  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:43.408056  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:43.426357  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.426379  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:43.426438  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:43.443894  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.443922  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:43.443975  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:43.462431  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.462460  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:43.462513  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:43.491147  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.491179  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:43.491246  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:43.514040  170748 logs.go:276] 0 containers: []
	W0229 01:52:43.514069  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:43.514084  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:43.514099  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:43.589272  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:43.589319  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:43.615113  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:43.615152  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:43.716987  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:43.717013  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:43.717028  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:43.780118  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:43.780163  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:46.365926  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:46.381672  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:46.405062  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.405091  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:46.405152  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:46.426719  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.426754  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:46.426819  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:46.447140  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.447171  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:46.447246  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:46.464512  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.464544  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:46.464596  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:46.496452  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.496492  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:46.496555  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:46.528226  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.528250  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:46.528303  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:46.561583  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.561621  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:46.561685  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:46.591649  170748 logs.go:276] 0 containers: []
	W0229 01:52:46.591684  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:46.591697  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:46.591714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:46.658495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:46.658540  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:46.677168  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:46.677199  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:46.768307  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:46.796371  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:46.796399  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:46.842519  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:46.842567  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:49.403340  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:49.416954  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:49.434235  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.434261  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:49.434310  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:49.452715  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.452740  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:49.452785  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:49.470194  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.470233  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:49.470300  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:49.506192  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.506225  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:49.506283  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:49.531231  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.531263  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:49.531327  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:49.559916  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.559946  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:49.560009  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:49.584965  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.584999  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:49.585061  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:49.602196  170748 logs.go:276] 0 containers: []
	W0229 01:52:49.602226  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:49.602241  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:49.602254  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:49.653436  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:49.653467  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:49.669278  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:49.669313  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:49.736385  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:49.736408  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:49.736424  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:49.788419  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:49.788454  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:52.352442  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:52.366416  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:52.386840  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.386868  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:52.386921  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:52.405299  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.405323  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:52.405370  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:52.422513  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.422542  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:52.422592  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:52.440070  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.440094  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:52.440138  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:52.456585  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.456608  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:52.456651  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:52.474314  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.474343  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:52.474411  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:52.502709  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.502738  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:52.502818  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:52.533426  170748 logs.go:276] 0 containers: []
	W0229 01:52:52.533462  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:52.533478  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:52.533496  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:52.638902  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:52.638924  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:52.638938  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:52.680959  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:52.680991  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:52.756680  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:52.756711  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:52.809797  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:52.809834  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:55.326488  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:55.347256  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:55.370286  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.370334  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:55.370415  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:55.389846  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.389875  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:55.389925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:55.408826  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.408858  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:55.408920  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:55.426449  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.426480  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:55.426542  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:55.446632  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.446712  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:55.446776  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:55.466002  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.466026  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:55.466086  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:55.499450  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.499483  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:55.499546  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:55.522549  170748 logs.go:276] 0 containers: []
	W0229 01:52:55.522582  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:55.522596  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:55.522625  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:55.643617  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:55.643647  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:55.643665  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:52:55.700195  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:55.700237  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:55.771130  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:55.771171  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:55.836682  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:55.836722  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:58.353736  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:52:58.372087  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:52:58.393611  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.393644  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:52:58.393703  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:52:58.416948  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.416986  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:52:58.417052  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:52:58.438765  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.438793  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:52:58.438863  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:52:58.459096  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.459123  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:52:58.459171  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:52:58.480691  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.480729  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:52:58.480793  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:52:58.514491  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.514526  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:52:58.514587  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:52:58.549697  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.549731  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:52:58.549814  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:52:58.569771  170748 logs.go:276] 0 containers: []
	W0229 01:52:58.569819  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:52:58.569832  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:52:58.569852  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:52:58.646315  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:52:58.646347  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:52:58.695740  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:52:58.695773  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:52:58.710788  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:52:58.710815  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:52:58.780579  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:52:58.780599  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:52:58.780611  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:01.328441  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:01.343305  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:01.363779  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.363812  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:01.363878  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:01.384882  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.384915  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:01.384984  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:01.404161  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.404200  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:01.404266  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:01.422442  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.422472  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:01.422553  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:01.441080  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.441108  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:01.441172  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:01.459849  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.459882  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:01.459971  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:01.478790  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.478820  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:01.478881  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:01.502943  170748 logs.go:276] 0 containers: []
	W0229 01:53:01.502979  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:01.502993  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:01.503009  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:01.568762  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:01.568809  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:01.593960  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:01.594002  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:01.694674  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:01.694702  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:01.694721  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:01.743026  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:01.743076  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:04.320484  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:04.334170  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:04.358090  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.358114  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:04.358161  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:04.375394  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.375421  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:04.375471  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:04.392959  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.392981  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:04.393029  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:04.412915  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.412949  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:04.413030  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:04.432914  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.432936  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:04.432990  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:04.452918  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.452946  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:04.453006  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:04.472009  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.472039  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:04.472121  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:04.508440  170748 logs.go:276] 0 containers: []
	W0229 01:53:04.508469  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:04.508484  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:04.508500  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:04.605584  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:04.605607  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:04.605621  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:04.653122  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:04.653161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:04.713785  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:04.713821  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:04.763752  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:04.763783  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:07.278917  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:07.294267  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:07.312292  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.312320  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:07.312369  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:07.329588  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.329625  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:07.329681  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:07.346841  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.346875  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:07.346943  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:07.365530  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.365556  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:07.365606  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:07.382584  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.382612  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:07.382659  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:07.400221  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.400243  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:07.400296  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:07.417320  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.417342  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:07.417386  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:07.434198  170748 logs.go:276] 0 containers: []
	W0229 01:53:07.434219  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:07.434230  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:07.434242  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:07.503749  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:07.503786  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:07.561501  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:07.561539  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:07.576970  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:07.577004  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:07.654722  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:07.654745  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:07.654759  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:10.200363  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:10.214321  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:10.232795  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.232821  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:10.232869  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:10.249842  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.249865  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:10.249910  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:10.265771  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.265809  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:10.265866  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:10.283880  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.283905  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:10.283946  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:10.300939  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.300966  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:10.301009  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:10.316794  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.316821  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:10.316887  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:10.333713  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.333735  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:10.333802  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:10.349668  170748 logs.go:276] 0 containers: []
	W0229 01:53:10.349691  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:10.349703  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:10.349722  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:10.390736  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:10.390769  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:10.450429  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:10.450456  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:10.510304  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:10.510353  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:10.526822  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:10.526848  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:10.603814  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
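	
	Every describe-nodes attempt in this window fails the same way: the apiserver endpoint at localhost:8443 refuses connections, consistent with the empty `docker ps` results for `k8s_kube-apiserver`. A sketch of confirming the symptom directly from inside the VM, independent of kubectl (assuming the apiserver would serve its standard /healthz endpoint on that port if it were up):
	
	```bash
	# Probe the apiserver port directly; with no apiserver container running,
	# both checks are expected to fail with "connection refused".
	curl -ksS https://localhost:8443/healthz || echo "apiserver not answering on 8443"
	nc -z -w 2 localhost 8443 && echo "port 8443 open" || echo "port 8443 closed"
	```
	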
	I0229 01:53:13.104454  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:13.121062  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:13.143969  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.144009  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:13.144100  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:13.166654  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.166683  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:13.166747  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:13.187835  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.187863  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:13.187925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:13.209971  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.210007  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:13.210066  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:13.230855  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.230888  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:13.230957  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:13.250532  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.250564  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:13.250619  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:13.266918  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.266943  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:13.266999  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:13.283682  170748 logs.go:276] 0 containers: []
	W0229 01:53:13.283715  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:13.283729  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:13.283744  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:13.341133  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:13.341170  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:13.355711  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:13.355741  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:13.425522  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:13.425546  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:13.425567  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:13.481546  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:13.481588  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
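
Each cycle above is one full diagnostic pass: minikube first probes for a kube-apiserver process, then lists containers for each expected control-plane component with docker ps -a --filter=name=k8s_<component>, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal, self-contained sketch of that container check follows; it is an illustration of the pattern the log shows, not minikube's actual logs.go, and it runs the docker CLI locally rather than over SSH inside the VM as minikube does.

// Sketch of the per-component container check seen in the log above.
// Assumes a local docker CLI; component names are taken from the log lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching" lines.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
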
	I0229 01:53:16.066332  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:16.083841  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:16.103028  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.103057  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:16.103117  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:16.125680  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.125705  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:16.125755  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:16.149753  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.149802  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:16.149865  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:16.173943  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.173981  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:16.174045  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:16.197463  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.197488  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:16.197541  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:16.224387  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.224421  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:16.224488  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:16.245773  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.245822  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:16.245882  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:16.269293  170748 logs.go:276] 0 containers: []
	W0229 01:53:16.269325  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:16.269339  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:16.269355  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:16.334053  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:16.334099  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:16.352866  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:16.352906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:16.432783  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:16.432808  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:16.432821  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:16.490574  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:16.490622  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
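
The timestamps show the pass repeating on a roughly 3-second cadence (01:53:13, 01:53:16, 01:53:19, ...), each round opening with the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe. A minimal sketch of such a fixed-interval wait loop is below; the 3-second interval matches the observed spacing, but the 2-minute deadline is an assumption for illustration, not a value taken from the log.

// Sketch of a fixed-interval wait-for-apiserver loop like the one the
// timestamps suggest. Requires pgrep on PATH; deadline is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
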
	I0229 01:53:19.070564  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:19.085381  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:19.105614  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.105646  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:19.105708  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:19.123837  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.123867  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:19.123925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:19.145472  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.145497  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:19.145556  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:19.163654  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.163681  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:19.163737  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:19.182446  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.182476  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:19.182531  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:19.202662  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.202690  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:19.202762  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:19.221115  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.221138  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:19.221198  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:19.238522  170748 logs.go:276] 0 containers: []
	W0229 01:53:19.238551  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:19.238565  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:19.238607  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:19.296193  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:19.296228  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:19.313059  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:19.313105  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:19.401890  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:19.401917  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:19.401934  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:19.455428  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:19.455463  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:22.063032  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:22.078041  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:22.100472  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.100502  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:22.100569  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:22.121977  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.122009  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:22.122077  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:22.140484  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.140518  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:22.140580  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:22.158683  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.158716  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:22.158777  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:22.176598  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.176629  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:22.176685  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:22.195630  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.195664  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:22.195724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:22.216240  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.216268  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:22.216328  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:22.235173  170748 logs.go:276] 0 containers: []
	W0229 01:53:22.235202  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:22.235217  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:22.235232  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:22.291384  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:22.291420  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:22.309723  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:22.309765  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:22.384685  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:22.384708  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:22.384724  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:22.428173  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:22.428210  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:25.025598  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:25.039721  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:25.056799  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.056823  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:25.056878  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:25.074816  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.074847  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:25.074902  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:25.091898  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.091926  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:25.091975  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:25.111274  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.111296  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:25.111349  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:25.128253  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.128277  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:25.128325  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:25.144665  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.144685  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:25.144735  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:25.161553  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.161578  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:25.161627  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:25.177879  170748 logs.go:276] 0 containers: []
	W0229 01:53:25.177913  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:25.177931  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:25.177947  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:25.250813  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:25.250837  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:25.250853  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:25.293835  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:25.293868  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:25.352485  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:25.352514  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:25.402435  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:25.402472  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:27.926758  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:27.941112  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:27.958551  170748 logs.go:276] 0 containers: []
	W0229 01:53:27.958573  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:27.958622  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:27.974320  170748 logs.go:276] 0 containers: []
	W0229 01:53:27.974344  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:27.974434  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:27.990512  170748 logs.go:276] 0 containers: []
	W0229 01:53:27.990541  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:27.990600  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:28.007471  170748 logs.go:276] 0 containers: []
	W0229 01:53:28.007502  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:28.007549  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:28.024392  170748 logs.go:276] 0 containers: []
	W0229 01:53:28.024423  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:28.024481  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:28.042215  170748 logs.go:276] 0 containers: []
	W0229 01:53:28.042245  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:28.042302  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:28.058038  170748 logs.go:276] 0 containers: []
	W0229 01:53:28.058064  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:28.058121  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:28.074106  170748 logs.go:276] 0 containers: []
	W0229 01:53:28.074134  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:28.074148  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:28.074161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:28.123223  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:28.123260  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:28.137313  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:28.137342  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:28.204418  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:28.204446  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:28.204461  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:28.247305  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:28.247337  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:30.809592  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:30.825316  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:30.844970  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.844995  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:30.845045  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:30.864193  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.864220  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:30.864277  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:30.883132  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.883157  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:30.883213  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:30.905617  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.905643  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:30.905692  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:30.922542  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.922571  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:30.922650  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:30.940432  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.940455  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:30.940499  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:30.957127  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.957150  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:30.957195  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:30.976999  170748 logs.go:276] 0 containers: []
	W0229 01:53:30.977027  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:30.977041  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:30.977055  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:31.028563  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:31.028598  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:31.042529  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:31.042560  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:31.107474  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:31.107500  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:31.107515  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:31.154941  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:31.154983  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:33.714062  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:33.731960  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:33.752566  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.752589  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:33.752634  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:33.772179  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.772209  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:33.772273  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:33.793539  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.793564  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:33.793613  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:33.813308  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.813335  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:33.813390  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:33.834355  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.834387  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:33.834470  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:33.855931  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.855958  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:33.856011  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:33.873255  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.873284  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:33.873340  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:33.890985  170748 logs.go:276] 0 containers: []
	W0229 01:53:33.891016  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:33.891031  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:33.891060  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:33.906614  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:33.906653  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:33.985273  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:33.985301  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:33.985319  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:34.031347  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:34.031382  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:34.088901  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:34.088934  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:36.639046  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:36.655282  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:36.676203  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.676224  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:36.676269  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:36.698810  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.698833  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:36.698880  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:36.719730  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.719755  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:36.719807  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:36.748913  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.748937  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:36.749001  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:36.771905  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.771929  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:36.771974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:36.795209  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.795242  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:36.795305  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:36.818025  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.818055  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:36.818111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:36.845202  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.845228  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:36.845238  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:36.845249  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:36.863710  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:36.863746  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:36.941560  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:36.941585  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:36.941599  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:36.985345  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:36.985374  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:37.049297  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:37.049331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:39.600693  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:39.614787  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:39.637491  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.637520  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:39.637579  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:39.655913  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.655934  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:39.655974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:39.673860  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.673884  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:39.673948  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:39.694282  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.694306  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:39.694362  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:39.713273  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.713298  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:39.713354  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:39.738601  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.738637  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:39.738694  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:39.757911  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.757946  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:39.758003  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:39.785844  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.785876  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:39.785889  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:39.785923  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:39.890021  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:39.890046  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:39.890063  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:39.946696  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:39.946738  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:40.011265  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:40.011294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:40.061033  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:40.061066  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:42.579474  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:42.594968  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:42.614588  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.614619  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:42.614678  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:42.633590  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.633626  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:42.633675  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:42.650641  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.650670  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:42.650725  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:42.667825  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.667848  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:42.667896  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:42.687222  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.687250  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:42.687306  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:42.707192  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.707221  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:42.707283  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:42.727815  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.727842  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:42.727909  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:42.747315  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.747344  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:42.747358  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:42.747373  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:42.835128  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.835153  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:42.835166  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:42.878670  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:42.878704  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:42.938260  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:42.938295  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:42.988986  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:42.989023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.504852  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:45.519775  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:45.544878  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.544907  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:45.544956  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:45.564358  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.564392  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:45.564452  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:45.585154  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.585184  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:45.585248  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:45.605709  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.605739  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:45.605811  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:45.623803  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.623890  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:45.623962  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:45.643133  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.643164  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:45.643234  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:45.661762  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.661802  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:45.661861  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:45.680592  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.680620  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:45.680634  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:45.680649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:45.745642  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:45.745700  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:45.823069  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:45.823109  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:45.892445  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:45.892486  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.910297  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:45.910333  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:45.990129  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
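
Every "describe nodes" attempt fails the same way because the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443 inside the VM, and with no kube-apiserver container running nothing is listening on that port, so the TCP connect is actively refused rather than timing out. A small probe that distinguishes "connection refused" from a slow or unreachable endpoint is sketched below; it is an illustration of the failing precondition, not minikube code.

// Probe localhost:8443 the way kubectl's connection attempt would land.
// On the failing node this prints a "connection refused" error immediately.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
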
	I0229 01:53:48.491272  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:48.505184  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:48.525599  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.525629  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:48.525706  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:48.546500  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.546532  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:48.546594  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:48.568626  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.568658  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:48.568721  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:48.587381  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.587414  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:48.587473  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:48.605940  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.605978  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:48.606036  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:48.627862  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.627939  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:48.627990  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:48.647290  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.647337  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:48.647409  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:48.668387  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.668421  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:48.668436  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:48.668465  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:48.749495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:48.749564  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:48.768497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:48.768537  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:48.851955  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:48.851986  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:48.852007  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:48.897006  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:48.897051  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:51.469648  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:51.483142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:51.505315  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.505336  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:51.505382  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:51.527266  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.527291  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:51.527349  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:51.549665  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.549695  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:51.549762  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:51.567017  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.567048  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:51.567115  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:51.584257  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.584283  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:51.584330  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:51.601100  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.601120  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:51.601162  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:51.617334  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.617364  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:51.617412  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:51.634847  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.634870  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:51.634884  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:51.634906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:51.699822  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:51.699852  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:51.699874  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:51.748726  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:51.748767  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:51.821091  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:51.821125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:51.870732  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:51.870762  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.385901  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:54.399480  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:54.417966  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.417996  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:54.418059  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:54.436602  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.436625  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:54.436671  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:54.454846  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.454871  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:54.454929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:54.475020  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.475052  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:54.475106  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:54.492090  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.492124  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:54.492179  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:54.508529  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.508552  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:54.508612  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:54.525505  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.525532  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:54.525592  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:54.542182  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.542205  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:54.542217  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:54.542231  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:54.591034  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:54.591075  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.607014  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:54.607059  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:54.673259  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:54.673277  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:54.673294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:54.735883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:54.735933  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:57.304118  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:57.317352  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:57.334647  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.334674  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:57.334724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:57.354591  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.354620  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:57.354664  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:57.378535  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.378558  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:57.378613  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:57.398944  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.398973  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:57.399019  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:57.419479  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.419500  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:57.419544  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:57.435860  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.435888  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:57.435942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:57.453347  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.453383  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:57.453430  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:57.473140  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.473168  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:57.473182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:57.473196  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:57.526048  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:57.526079  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:57.541246  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:57.541271  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:57.616011  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:57.616037  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:57.616052  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:57.658815  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:57.658856  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
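	The container-status command above uses a small fallback chain worth spelling out: if crictl is on PATH it is used; otherwise `which` prints nothing, `echo crictl` substitutes the bare word, that invocation fails, and the `|| sudo docker ps -a` branch runs instead. Annotated (the command itself is verbatim from the log):

	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    # which crictl      -> path to crictl, or empty output (and exit 1) if not installed
	    # echo crictl       -> fallback word so the command substitution is never empty
	    # || docker ps -a   -> final fallback when the crictl invocation fails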
	I0229 01:54:00.228028  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:00.242250  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:00.260188  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.260217  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:00.260277  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:00.279694  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.279717  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:00.279768  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:00.300245  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.300276  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:00.300331  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:00.321402  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.321423  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:00.321484  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:00.341221  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.341252  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:00.341309  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:00.359202  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.359228  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:00.359274  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:00.377486  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.377515  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:00.377566  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:00.396751  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.396780  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:00.396792  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:00.396804  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:00.411321  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:00.411354  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:00.486044  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:00.486070  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:00.486086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:00.533467  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:00.533493  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.601400  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:00.601429  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
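	The recurring "describe nodes" failure is a downstream symptom rather than a separate fault: the bundled kubectl is pointed at the control-plane endpoint in /var/lib/minikube/kubeconfig (localhost:8443), and with no apiserver container running nothing listens there, hence "connection refused" on every attempt. One way to confirm by hand, assuming the ss utility is available in the guest (an assumption; it does not appear in the log):

	    # Expect no listener on 8443 while the apiserver is down
	    minikube ssh -p <profile> "sudo ss -ltn | grep 8443 || echo 'nothing listening on 8443'"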
	[... the probe-and-gather cycle above repeats roughly every three seconds from 01:54:03 through 01:54:32; every iteration finds zero matching containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard, and every "describe nodes" attempt fails with the same localhost:8443 connection-refused error ...]
	I0229 01:54:35.396632  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:35.412053  170748 kubeadm.go:640] restartCluster took 4m11.905401704s
	W0229 01:54:35.412153  170748 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
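	restartCluster spent its whole budget (4m11.9s) in that loop without the pgrep probe ever matching, so "apiserver process never appeared" is literal: the health check never got past the process-existence stage to an HTTP healthz probe. A sketch of the two stages the message implies (hedged; the exact internal sequencing is minikube's and is not shown in this log):

	    # Stage 1: does an apiserver process exist? (never succeeded in this run)
	    minikube ssh -p <profile> "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" \
	      && minikube ssh -p <profile> "curl -k -sS https://localhost:8443/healthz"
	    # Stage 2 (the healthz probe) is only reached if stage 1 passes.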
	I0229 01:54:35.412183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:54:35.838651  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:35.854409  170748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:35.865129  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:35.875642  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:54:35.875696  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:54:36.022349  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:54:36.059938  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:54:36.131386  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
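	The three preflight warnings above are non-fatal for this legacy combination: the cgroupfs/systemd driver note and the unvalidated Docker 24.0.7 (kubeadm v1.16 last validated against 18.09) are expected when pairing an old kubeadm with a newer runtime, and the disabled kubelet service is started by kubeadm itself in the kubelet-start phase (see "Activating the kubelet service" below). A quick manual check of that last item, if desired:

	    # Verify kubelet's unit state; kubeadm enables/starts it during init
	    minikube ssh -p <profile> "sudo systemctl is-enabled kubelet; sudo systemctl is-active kubelet"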
	I0229 01:56:32.235880  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:56:32.236029  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:56:32.238423  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:56:32.238502  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:56:32.238599  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:56:32.238744  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:56:32.238904  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:56:32.239073  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:56:32.239200  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:56:32.239271  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:56:32.239350  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:56:32.241126  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:56:32.241192  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:56:32.241251  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:56:32.241317  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:56:32.241394  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:56:32.241469  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:56:32.241523  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:56:32.241605  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:56:32.241700  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:56:32.241811  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:56:32.241921  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:56:32.242001  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:56:32.242081  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:56:32.242164  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:56:32.242247  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:56:32.242344  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:56:32.242427  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:56:32.242484  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:56:32.244633  170748 out.go:204]   - Booting up control plane ...
	I0229 01:56:32.244727  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:56:32.244807  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:56:32.244884  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:56:32.244992  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:56:32.245189  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:56:32.245267  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:56:32.245360  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245532  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245599  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245746  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245826  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245998  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246093  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246273  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246359  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246574  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246588  170748 kubeadm.go:322] 
	I0229 01:56:32.246630  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:56:32.246679  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:56:32.246693  170748 kubeadm.go:322] 
	I0229 01:56:32.246740  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:56:32.246791  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:56:32.246892  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:56:32.246905  170748 kubeadm.go:322] 
	I0229 01:56:32.247026  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:56:32.247072  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:56:32.247116  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:56:32.247124  170748 kubeadm.go:322] 
	I0229 01:56:32.247212  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:56:32.247289  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:56:32.247361  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:56:32.247406  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:56:32.247488  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:56:32.247541  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 01:56:32.247677  170748 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 01:56:32.247732  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:56:32.689675  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:32.704123  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:56:32.713829  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:56:32.713881  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:56:32.847290  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:56:32.879658  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:56:32.959513  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:58:29.528786  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:58:29.528884  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:58:29.530491  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:58:29.530596  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:58:29.530680  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:58:29.530764  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:58:29.530861  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:58:29.530964  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:58:29.531068  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:58:29.531119  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:58:29.531176  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:58:29.532944  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:58:29.533047  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:58:29.533144  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:58:29.533247  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:58:29.533305  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:58:29.533379  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:58:29.533441  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:58:29.533506  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:58:29.533567  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:58:29.533636  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:58:29.533700  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:58:29.533744  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:58:29.533806  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:58:29.533878  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:58:29.533967  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:58:29.534067  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:58:29.534153  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:58:29.534217  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:58:29.535694  170748 out.go:204]   - Booting up control plane ...
	I0229 01:58:29.535778  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:58:29.535844  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:58:29.535904  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:58:29.535972  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:58:29.536127  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:58:29.536212  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:58:29.536285  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536458  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536538  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536729  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536791  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536941  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537007  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537189  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537267  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537495  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537513  170748 kubeadm.go:322] 
	I0229 01:58:29.537569  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:58:29.537626  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:58:29.537636  170748 kubeadm.go:322] 
	I0229 01:58:29.537685  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:58:29.537744  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:58:29.537903  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:58:29.537915  170748 kubeadm.go:322] 
	I0229 01:58:29.538065  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:58:29.538113  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:58:29.538174  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:58:29.538183  170748 kubeadm.go:322] 
	I0229 01:58:29.538325  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:58:29.538450  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:58:29.538581  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:58:29.538656  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:58:29.538743  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:58:29.538829  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:58:29.538866  170748 kubeadm.go:406] StartCluster complete in 8m6.061536028s
	I0229 01:58:29.538947  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:58:29.556117  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.556141  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:58:29.556205  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:58:29.572791  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.572812  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:58:29.572857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:58:29.589544  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.589565  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:58:29.589625  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:58:29.605410  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.605426  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:58:29.605472  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:58:29.621393  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.621412  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:58:29.621450  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:58:29.637671  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.637690  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:58:29.637732  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:58:29.653501  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.653533  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:58:29.653590  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:58:29.669033  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.669058  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:58:29.669072  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:58:29.669086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:58:29.722126  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:58:29.722161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:58:29.735919  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:58:29.735946  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:58:29.803585  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:58:29.803615  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:58:29.803629  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:58:29.843153  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:58:29.843183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 01:58:29.906091  170748 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:58:29.906150  170748 out.go:239] * 
	W0229 01:58:29.906209  170748 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.906231  170748 out.go:239] * 
	W0229 01:58:29.906995  170748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:58:29.910220  170748 out.go:177] 
	W0229 01:58:29.911536  170748 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.911581  170748 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:58:29.911600  170748 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:58:29.912937  170748 out.go:177] 

** /stderr **
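The [kubelet-check] lines in the stderr above are kubeadm repeatedly probing the kubelet's local health endpoint on port 10248; every attempt was refused, so the static control-plane pods never came up. The same probe can be reproduced by hand on the node as a small bash loop (a minimal sketch; endpoint and port are taken from the log above, the retry count is arbitrary):

	for i in $(seq 1 10); do
	  # succeeds only once the kubelet's healthz endpoint answers
	  curl -sSf http://localhost:10248/healthz && break
	  echo "kubelet not healthy yet (attempt $i), retrying..." >&2
	  sleep 5
	done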
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-096771 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0": exit status 109
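The IsDockerSystemdCheck warning and the suggestion in the stderr point at a cgroup-driver mismatch: Docker is running with the cgroupfs driver while systemd is the recommended one. One commonly cited remediation, assuming Docker is the runtime inside the node and that restarting it is acceptable, is to switch Docker to the systemd cgroup driver (a sketch, not what the test harness itself does):

	# /etc/docker/daemon.json: ask dockerd to use the systemd cgroup driver
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}
	EOF
	sudo systemctl restart docker

Alternatively, as the suggestion itself says, the kubelet side can be aligned instead by passing --extra-config=kubelet.cgroup-driver=systemd to minikube start.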
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (261.925267ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096771 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-384331 image list                          | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-133807             | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-133807                  | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-133807 image list                           | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| image   | no-preload-449532 image list                           | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| image   | default-k8s-diff-port-308557                           | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:53:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:53:36.885660  172338 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:53:36.885812  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.885823  172338 out.go:304] Setting ErrFile to fd 2...
	I0229 01:53:36.885830  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.886451  172338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:53:36.887445  172338 out.go:298] Setting JSON to false
	I0229 01:53:36.888850  172338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5768,"bootTime":1709165849,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:53:36.888922  172338 start.go:139] virtualization: kvm guest
	I0229 01:53:36.890884  172338 out.go:177] * [newest-cni-133807] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:53:36.892679  172338 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:53:36.893863  172338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:53:36.892754  172338 notify.go:220] Checking for updates...
	I0229 01:53:36.895149  172338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:53:36.896330  172338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:53:36.897604  172338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:53:36.898902  172338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:53:36.900711  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:36.901271  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.901326  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.917325  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0229 01:53:36.917751  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.918470  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.918496  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.918925  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.919139  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:36.919426  172338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:53:36.919862  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.919920  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.935501  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0229 01:53:36.935929  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.936397  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.936423  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.936740  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.936966  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:36.975046  172338 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:53:36.976294  172338 start.go:299] selected driver: kvm2
	I0229 01:53:36.976310  172338 start.go:903] validating driver "kvm2" against &{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.976488  172338 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:53:36.977258  172338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.977350  172338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:53:36.994597  172338 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:53:36.994975  172338 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 01:53:36.995042  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:53:36.995059  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:53:36.995069  172338 start_flags.go:323] config:
	{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.995229  172338 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.997622  172338 out.go:177] * Starting control plane node newest-cni-133807 in cluster newest-cni-133807
	I0229 01:53:36.998696  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:53:36.998739  172338 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 01:53:36.998757  172338 cache.go:56] Caching tarball of preloaded images
	I0229 01:53:36.998845  172338 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:53:36.998863  172338 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 01:53:36.998993  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
	I0229 01:53:36.999265  172338 start.go:365] acquiring machines lock for newest-cni-133807: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:53:36.999328  172338 start.go:369] acquired machines lock for "newest-cni-133807" in 34.294µs
	I0229 01:53:36.999350  172338 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:53:36.999359  172338 fix.go:54] fixHost starting: 
	I0229 01:53:36.999756  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.999804  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:37.014484  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0229 01:53:37.014854  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:37.015358  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:37.015380  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:37.015794  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:37.016017  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:37.016186  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:53:37.017841  172338 fix.go:102] recreateIfNeeded on newest-cni-133807: state=Stopped err=<nil>
	I0229 01:53:37.017866  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	W0229 01:53:37.018024  172338 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:53:37.019758  172338 out.go:177] * Restarting existing kvm2 VM for "newest-cni-133807" ...
	I0229 01:53:35.187854  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:37.188009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:35.706584  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:38.207259  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:36.771905  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.771929  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:36.771974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:36.795209  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.795242  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:36.795305  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:36.818025  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.818055  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:36.818111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:36.845202  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.845228  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:36.845238  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:36.845249  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:36.863710  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:36.863746  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:36.941560  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
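The describe-nodes step fails with "connection refused" on localhost:8443 simply because no kube-apiserver container exists yet, as the empty docker ps results above show. A quick manual check from inside the node for whether anything answers on that port (a sketch; -k skips certificate verification):

	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"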
	I0229 01:53:36.941585  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:36.941599  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:36.985345  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:36.985374  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:37.049297  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:37.049331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
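Each gathering cycle above looks for the standard control-plane containers by name using the same docker ps filter; reproduced by hand it is just a loop over the expected k8s_ name prefixes (component names taken from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
	done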
	I0229 01:53:39.600693  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:39.614787  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:39.637491  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.637520  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:39.637579  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:39.655913  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.655934  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:39.655974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:39.673860  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.673884  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:39.673948  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:39.694282  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.694306  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:39.694362  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:39.713273  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.713298  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:39.713354  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:39.738601  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.738637  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:39.738694  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:39.757911  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.757946  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:39.758003  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:39.785844  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.785876  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:39.785889  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:39.785923  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:39.890021  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:39.890046  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:39.890063  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:39.946696  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:39.946738  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:40.011265  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:40.011294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:40.061033  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:40.061066  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:37.020899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Start
	I0229 01:53:37.021060  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring networks are active...
	I0229 01:53:37.021715  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network default is active
	I0229 01:53:37.022109  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network mk-newest-cni-133807 is active
	I0229 01:53:37.022542  172338 main.go:141] libmachine: (newest-cni-133807) Getting domain xml...
	I0229 01:53:37.023299  172338 main.go:141] libmachine: (newest-cni-133807) Creating domain...
	I0229 01:53:38.239149  172338 main.go:141] libmachine: (newest-cni-133807) Waiting to get IP...
	I0229 01:53:38.240362  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.240876  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.240965  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.240868  172372 retry.go:31] will retry after 275.310864ms: waiting for machine to come up
	I0229 01:53:38.517440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.518160  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.518185  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.518111  172372 retry.go:31] will retry after 317.329288ms: waiting for machine to come up
	I0229 01:53:38.836647  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.837248  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.837276  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.837187  172372 retry.go:31] will retry after 392.589727ms: waiting for machine to come up
	I0229 01:53:39.231732  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.232246  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.232285  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.232194  172372 retry.go:31] will retry after 424.503594ms: waiting for machine to come up
	I0229 01:53:39.658948  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.659654  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.659681  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.659612  172372 retry.go:31] will retry after 509.777965ms: waiting for machine to come up
	I0229 01:53:40.171487  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.172122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.172152  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.172076  172372 retry.go:31] will retry after 742.622621ms: waiting for machine to come up
	I0229 01:53:40.915896  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.916440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.916470  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.916388  172372 retry.go:31] will retry after 749.503001ms: waiting for machine to come up
	I0229 01:53:41.667865  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:41.668416  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:41.668460  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:41.668341  172372 retry.go:31] will retry after 899.624948ms: waiting for machine to come up
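The retry.go lines show libmachine polling libvirt for the restarted VM's DHCP lease, backing off between attempts. An equivalent manual check, assuming the domain name from the log and virsh available on the host, would be:

	delay=1
	until sudo virsh domifaddr newest-cni-133807 | grep -q ipv4; do
	  sleep "$delay"
	  delay=$((delay * 2))   # back off while waiting for a lease
	done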
	I0229 01:53:39.686755  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:41.687219  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:40.705623  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:42.708440  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
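The interleaved pod_ready lines come from two other test profiles polling the Ready condition of their metrics-server pods. The same wait can be expressed with kubectl against the respective cluster (pod name taken from the log; a sketch):

	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-57f55c9bc5-nhrls --timeout=5m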
	I0229 01:53:42.579474  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:42.594968  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:42.614588  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.614619  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:42.614678  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:42.633590  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.633626  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:42.633675  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:42.650641  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.650670  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:42.650725  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:42.667825  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.667848  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:42.667896  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:42.687222  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.687250  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:42.687306  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:42.707192  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.707221  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:42.707283  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:42.727815  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.727842  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:42.727909  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:42.747315  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.747344  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:42.747358  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:42.747373  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:42.835128  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.835153  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:42.835166  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:42.878670  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:42.878704  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:42.938260  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:42.938295  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:42.988986  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:42.989023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.504852  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:45.519775  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:45.544878  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.544907  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:45.544956  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:45.564358  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.564392  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:45.564452  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:45.585154  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.585184  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:45.585248  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:45.605709  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.605739  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:45.605811  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:45.623803  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.623890  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:45.623962  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:45.643133  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.643164  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:45.643234  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:45.661762  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.661802  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:45.661861  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:45.680592  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.680620  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:45.680634  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:45.680649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:45.745642  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:45.745700  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:45.823069  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:45.823109  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:45.892445  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:45.892486  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.910297  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:45.910333  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:45.990129  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.569261  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:42.569902  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:42.569929  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:42.569879  172372 retry.go:31] will retry after 1.844906669s: waiting for machine to come up
	I0229 01:53:44.416650  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:44.417122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:44.417147  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:44.417082  172372 retry.go:31] will retry after 1.668166694s: waiting for machine to come up
	I0229 01:53:46.086877  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:46.087409  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:46.087439  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:46.087360  172372 retry.go:31] will retry after 2.357310139s: waiting for machine to come up
	I0229 01:53:44.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:46.187804  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:48.689109  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:45.205820  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:47.207153  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:49.207534  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:48.491272  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:48.505184  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:48.525599  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.525629  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:48.525706  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:48.546500  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.546532  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:48.546594  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:48.568626  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.568658  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:48.568721  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:48.587381  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.587414  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:48.587473  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:48.605940  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.605978  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:48.606036  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:48.627862  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.627939  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:48.627990  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:48.647290  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.647337  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:48.647409  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:48.668387  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.668421  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:48.668436  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:48.668465  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:48.749495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:48.749564  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:48.768497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:48.768537  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:48.851955  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:48.851986  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:48.852007  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:48.897006  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:48.897051  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:51.469648  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:51.483142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:51.505315  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.505336  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:51.505382  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:51.527266  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.527291  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:51.527349  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:51.549665  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.549695  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:51.549762  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:51.567017  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.567048  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:51.567115  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:51.584257  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.584283  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:51.584330  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:51.601100  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.601120  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:51.601162  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:51.617334  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.617364  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:51.617412  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:51.634847  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.634870  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:51.634884  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:51.634906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:51.699822  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:51.699852  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:51.699874  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:51.748726  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:51.748767  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:48.446918  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:48.447458  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:48.447486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:48.447405  172372 retry.go:31] will retry after 3.5649966s: waiting for machine to come up
	I0229 01:53:50.690417  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:53.186096  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.706757  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:54.207589  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.821091  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:51.821125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:51.870732  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:51.870762  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.385901  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:54.399480  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:54.417966  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.417996  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:54.418059  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:54.436602  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.436625  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:54.436671  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:54.454846  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.454871  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:54.454929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:54.475020  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.475052  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:54.475106  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:54.492090  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.492124  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:54.492179  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:54.508529  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.508552  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:54.508612  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:54.525505  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.525532  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:54.525592  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:54.542182  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.542205  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:54.542217  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:54.542231  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:54.591034  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:54.591075  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.607014  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:54.607059  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:54.673259  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:54.673277  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:54.673294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:54.735883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:54.735933  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:52.015966  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:52.016461  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:52.016486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:52.016421  172372 retry.go:31] will retry after 3.221741445s: waiting for machine to come up
	I0229 01:53:55.241903  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242455  172338 main.go:141] libmachine: (newest-cni-133807) Found IP for machine: 192.168.50.38
	I0229 01:53:55.242486  172338 main.go:141] libmachine: (newest-cni-133807) Reserving static IP address...
	I0229 01:53:55.242513  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has current primary IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.242982  172338 main.go:141] libmachine: (newest-cni-133807) Reserved static IP address: 192.168.50.38
	I0229 01:53:55.243002  172338 main.go:141] libmachine: (newest-cni-133807) DBG | skip adding static IP to network mk-newest-cni-133807 - found existing host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"}
	I0229 01:53:55.243021  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Getting to WaitForSSH function...
	I0229 01:53:55.243051  172338 main.go:141] libmachine: (newest-cni-133807) Waiting for SSH to be available...
	I0229 01:53:55.245263  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245602  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.245635  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245719  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH client type: external
	I0229 01:53:55.245756  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa (-rw-------)
	I0229 01:53:55.245815  172338 main.go:141] libmachine: (newest-cni-133807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:53:55.245837  172338 main.go:141] libmachine: (newest-cni-133807) DBG | About to run SSH command:
	I0229 01:53:55.245849  172338 main.go:141] libmachine: (newest-cni-133807) DBG | exit 0
	I0229 01:53:55.365823  172338 main.go:141] libmachine: (newest-cni-133807) DBG | SSH cmd err, output: <nil>: 
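
WaitForSSH above shells out to the external ssh client and runs `exit 0` until the guest's sshd accepts the key. A minimal sketch of that readiness probe, with the option list trimmed from the log's full set and the key path a placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... exit 0` the way the external client in the log does;
// a zero exit status means sshd on the guest is up and accepting our key.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("/usr/bin/ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("192.168.50.38", "/path/to/id_rsa") { // key path is a placeholder
		fmt.Println("Waiting for SSH to be available...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("SSH is available")
}
```
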
	I0229 01:53:55.366165  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetConfigRaw
	I0229 01:53:55.366733  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.369039  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369334  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.369365  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369634  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
	I0229 01:53:55.369878  172338 machine.go:88] provisioning docker machine ...
	I0229 01:53:55.369899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.370074  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370280  172338 buildroot.go:166] provisioning hostname "newest-cni-133807"
	I0229 01:53:55.370305  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.372352  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372683  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.372714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372826  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.373050  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373221  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.373545  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.373765  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.373801  172338 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-133807 && echo "newest-cni-133807" | sudo tee /etc/hostname
	I0229 01:53:55.501380  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-133807
	
	I0229 01:53:55.501425  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.504532  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.504925  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.504953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.505203  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.505442  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505829  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.505993  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.506180  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.506197  172338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133807/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:53:55.627363  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 01:53:55.627403  172338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:53:55.627445  172338 buildroot.go:174] setting up certificates
	I0229 01:53:55.627465  172338 provision.go:83] configureAuth start
	I0229 01:53:55.627478  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.627799  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.630746  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631187  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.631216  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631361  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.633714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634069  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.634098  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634214  172338 provision.go:138] copyHostCerts
	I0229 01:53:55.634269  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:53:55.634288  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:53:55.634356  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:53:55.634447  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:53:55.634455  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:53:55.634478  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:53:55.634526  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:53:55.634534  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:53:55.634553  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:53:55.634601  172338 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133807 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube newest-cni-133807]
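
The server cert above is generated with the SAN list shown (`san=[192.168.50.38 ... localhost 127.0.0.1 minikube newest-cni-133807]`). A crypto/x509 sketch of issuing a server certificate with such SANs; this version self-signs for brevity, whereas the log signs with the minikube CA key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-133807"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log's san=[...] list.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-133807"},
		IPAddresses: []net.IP{net.ParseIP("192.168.50.38"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here; minikube uses its CA cert/key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
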
	I0229 01:53:55.739651  172338 provision.go:172] copyRemoteCerts
	I0229 01:53:55.739705  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:53:55.739730  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.742433  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.742797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.742821  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.743006  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.743211  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.743367  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.743503  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:55.825143  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:53:55.850150  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 01:53:55.873623  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:53:55.897271  172338 provision.go:86] duration metric: configureAuth took 269.790188ms
	I0229 01:53:55.897298  172338 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:53:55.897528  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:55.897558  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.897880  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.900413  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900726  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.900754  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900862  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.901029  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.901575  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.901796  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.901811  172338 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:53:56.003790  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:53:56.003817  172338 buildroot.go:70] root file system type: tmpfs
	I0229 01:53:56.003960  172338 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:53:56.003989  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.006912  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007266  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.007291  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007470  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.007629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007793  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.008184  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.008354  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.008418  172338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:53:56.124499  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:53:56.124533  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.127457  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127793  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.127829  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127968  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.128151  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128308  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128498  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.128680  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.128833  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.128852  172338 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:53:55.187275  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.189486  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:56.706921  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:59.205557  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.106913  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 01:53:57.106944  172338 machine.go:91] provisioned docker machine in 1.737051901s
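
The `diff -u old new || { mv new old; systemctl daemon-reload && ... restart docker; }` one-liner above only installs docker.service.new and restarts Docker when the rendered unit actually differs (here diff fails because the old unit doesn't exist yet, so the new one is installed). The same compare-before-replace idea as a Go sketch; paths and the reload step are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged installs newPath over path only when the contents differ,
// mirroring the log's diff-or-move shell sequence. A missing old file
// compares as empty, like the "diff: can't stat" case in the log.
func replaceIfChanged(path, newPath string) (bool, error) {
	old, _ := os.ReadFile(path)
	cand, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(old, cand) {
		return false, os.Remove(newPath) // nothing to do; drop the staged copy
	}
	if err := os.Rename(newPath, path); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if changed {
		// daemon-reload and restart, as in the log's one-liner.
		exec.Command("systemctl", "daemon-reload").Run()
		exec.Command("systemctl", "restart", "docker").Run()
	}
	fmt.Println("unit changed:", changed)
}
```
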
	I0229 01:53:57.106958  172338 start.go:300] post-start starting for "newest-cni-133807" (driver="kvm2")
	I0229 01:53:57.106971  172338 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:53:57.106987  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.107348  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:53:57.107378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.109947  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110278  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.110306  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110419  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.110655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.110847  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.110998  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.195254  172338 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:53:57.199660  172338 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:53:57.199686  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:53:57.199749  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:53:57.199861  172338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:53:57.199978  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:53:57.211667  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:53:57.236009  172338 start.go:303] post-start completed in 129.030126ms
	I0229 01:53:57.236038  172338 fix.go:56] fixHost completed within 20.236678345s
	I0229 01:53:57.236066  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.239097  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239405  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.239428  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239632  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.239810  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.239990  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.240135  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.240351  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:57.240577  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:57.240592  172338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 01:53:57.347803  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171637.329083069
	
	I0229 01:53:57.347829  172338 fix.go:206] guest clock: 1709171637.329083069
	I0229 01:53:57.347839  172338 fix.go:219] Guest: 2024-02-29 01:53:57.329083069 +0000 UTC Remote: 2024-02-29 01:53:57.236042976 +0000 UTC m=+20.403256492 (delta=93.040093ms)
	I0229 01:53:57.347867  172338 fix.go:190] guest clock delta is within tolerance: 93.040093ms
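
The fix.go lines above read the guest clock with `date +%s.%N` (the `%!s(MISSING)` noise in the log is Go's printf complaining about the literal `%` verbs in that command string) and compare it against the host clock. A sketch of that skew check; the tolerance value is an assumption for illustration:

```go
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta runs `date +%s.%N` on the guest over ssh and returns how
// far the guest clock is from the local clock, as in the fix.go lines above.
func guestClockDelta(ip, keyPath string) (time.Duration, error) {
	out, err := exec.Command("/usr/bin/ssh", "-i", keyPath,
		"docker@"+ip, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	const tolerance = time.Second // assumed threshold, for illustration
	delta, err := guestClockDelta("192.168.50.38", "/path/to/id_rsa")
	if err != nil {
		panic(err)
	}
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock needs adjustment: %s\n", delta)
	}
}
```
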
	I0229 01:53:57.347875  172338 start.go:83] releasing machines lock for "newest-cni-133807", held for 20.348533837s
	I0229 01:53:57.347898  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.348162  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:57.350842  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351284  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.351312  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351648  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352219  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352485  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352599  172338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:53:57.352685  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.352765  172338 ssh_runner.go:195] Run: cat /version.json
	I0229 01:53:57.352801  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.355935  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356331  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356570  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.356571  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356764  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.356906  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356923  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.356930  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.357085  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.357144  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.357257  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.357402  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.357558  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.439867  172338 ssh_runner.go:195] Run: systemctl --version
	I0229 01:53:57.461722  172338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:53:57.469492  172338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:53:57.469553  172338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:53:57.488804  172338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:53:57.488832  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.488972  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:53:57.510573  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 01:53:57.522254  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:53:57.533175  172338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:53:57.533265  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:53:57.544648  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.556155  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:53:57.568806  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.579441  172338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:53:57.591000  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:53:57.602790  172338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:53:57.612548  172338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:53:57.622708  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:57.774983  172338 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 01:53:57.803366  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.803462  172338 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:53:57.819377  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.835552  172338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:53:57.855766  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.870321  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.882616  172338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:53:57.906767  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.919519  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:53:57.937892  172338 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:53:57.941557  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:53:57.950404  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:53:57.966732  172338 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:53:58.084501  172338 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:53:58.208172  172338 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:53:58.208327  172338 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:53:58.231616  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:58.339214  172338 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:53:59.877873  172338 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.53860785s)
	I0229 01:53:59.877980  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 01:53:59.892601  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:53:59.908111  172338 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 01:54:00.026741  172338 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 01:54:00.150989  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.270596  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 01:54:00.292845  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:54:00.310771  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.442177  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 01:54:00.520800  172338 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 01:54:00.520874  172338 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 01:54:00.527623  172338 start.go:543] Will wait 60s for crictl version
	I0229 01:54:00.527683  172338 ssh_runner.go:195] Run: which crictl
	I0229 01:54:00.532463  172338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:54:00.599208  172338 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 01:54:00.599291  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.627562  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.655024  172338 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 01:54:00.655069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:54:00.658010  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658343  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:00.658372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658608  172338 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 01:54:00.662943  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:54:00.679113  172338 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 01:53:57.304118  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:57.317352  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:57.334647  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.334674  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:57.334724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:57.354591  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.354620  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:57.354664  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:57.378535  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.378558  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:57.378613  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:57.398944  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.398973  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:57.399019  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:57.419479  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.419500  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:57.419544  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:57.435860  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.435888  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:57.435942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:57.453347  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.453383  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:57.453430  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:57.473140  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.473168  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:57.473182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:57.473196  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:57.526048  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:57.526079  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:57.541246  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:57.541271  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:57.616011  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:57.616037  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:57.616052  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:57.658815  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:57.658856  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.228028  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:00.242250  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:00.260188  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.260217  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:00.260277  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:00.279694  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.279717  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:00.279768  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:00.300245  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.300276  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:00.300331  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:00.321402  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.321423  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:00.321484  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:00.341221  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.341252  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:00.341309  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:00.359202  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.359228  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:00.359274  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:00.377486  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.377515  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:00.377566  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:00.396751  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.396780  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:00.396792  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:00.396804  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:00.411321  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:00.411354  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:00.486044  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:00.486070  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:00.486086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:00.533467  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:00.533493  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.601400  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:00.601429  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:00.680518  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:54:00.680595  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.699558  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.699582  172338 docker.go:615] Images already preloaded, skipping extraction
	I0229 01:54:00.699651  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.720362  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.720382  172338 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:54:00.720435  172338 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:54:00.750538  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:00.750564  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:00.750582  172338 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 01:54:00.750604  172338 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133807 NodeName:newest-cni-133807 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:54:00.750845  172338 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-133807"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:54:00.750974  172338 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-133807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:54:00.751053  172338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 01:54:00.763338  172338 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:54:00.763421  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:54:00.774930  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0229 01:54:00.795559  172338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 01:54:00.816378  172338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0229 01:54:00.836392  172338 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0229 01:54:00.841301  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:54:00.855335  172338 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807 for IP: 192.168.50.38
	I0229 01:54:00.855370  172338 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:00.855555  172338 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:54:00.855595  172338 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:54:00.855699  172338 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/client.key
	I0229 01:54:00.855776  172338 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key.01da567d
	I0229 01:54:00.855837  172338 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key
	I0229 01:54:00.856003  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:54:00.856056  172338 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:54:00.856071  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:54:00.856107  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:54:00.856141  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:54:00.856172  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:54:00.856231  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:54:00.856935  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:54:00.884304  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:54:00.909114  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:54:00.932767  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:54:00.957174  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:54:00.982424  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:54:01.005673  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:54:01.029470  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:54:01.056951  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:54:01.080261  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:54:01.104850  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:54:01.128318  172338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:54:01.145321  172338 ssh_runner.go:195] Run: openssl version
	I0229 01:54:01.150792  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:54:01.162288  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166729  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166774  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.172237  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:54:01.183583  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:54:01.195364  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199820  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199890  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.205840  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:54:01.217694  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:54:01.229231  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233770  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233841  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.239419  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 01:54:01.250900  172338 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:54:01.255351  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:54:01.261364  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:54:01.267843  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:54:01.273917  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:54:01.279780  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:54:01.285722  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
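The openssl invocations above implement two standard checks: installing each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs (so TLS clients can locate it), and verifying that each control-plane certificate is not within 86400 seconds (one day) of expiry. A minimal standalone sketch of the same pattern follows; the certificate path is a hypothetical placeholder, not a file from this run:

    #!/bin/bash
    # Hypothetical PEM certificate; substitute any real cert path.
    CERT=/usr/share/ca-certificates/example.pem
    # Install the cert under its subject-hash name, as the log does above.
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    # Exit non-zero if the cert expires within the next 86400 seconds.
    openssl x509 -noout -in "$CERT" -checkend 86400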
	I0229 01:54:01.295181  172338 kubeadm.go:404] StartCluster: {Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:54:01.295318  172338 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:01.327657  172338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:54:01.340602  172338 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:54:01.340626  172338 kubeadm.go:636] restartCluster start
	I0229 01:54:01.340676  172338 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:54:01.351659  172338 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:01.352394  172338 kubeconfig.go:135] verify returned: extract IP: "newest-cni-133807" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:01.352778  172338 kubeconfig.go:146] "newest-cni-133807" context is missing from /home/jenkins/minikube-integration/18063-115328/kubeconfig - will repair!
	I0229 01:54:01.353471  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:01.354935  172338 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:54:01.365295  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.365346  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.379525  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:01.866175  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.866250  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.880632  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:53:59.689914  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.694344  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.208129  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.705473  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.160372  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:03.174216  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:03.193976  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.193997  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:03.194047  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:03.212210  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.212237  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:03.212282  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:03.229155  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.229178  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:03.229223  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:03.248201  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.248224  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:03.248287  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:03.267884  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.267908  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:03.267952  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:03.287746  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.287770  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:03.287821  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:03.306938  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.306967  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:03.307016  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:03.326486  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.326519  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:03.326534  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:03.326549  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:03.395132  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:03.395184  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:03.412879  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:03.412913  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:03.482097  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:03.482120  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:03.482132  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:03.525422  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:03.525455  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.083568  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:06.096663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:06.114370  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.114400  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:06.114445  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:06.131116  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.131136  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:06.131180  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:06.147183  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.147206  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:06.147261  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:06.163312  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.163335  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:06.163381  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:06.180224  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.180248  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:06.180302  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:06.197599  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.197627  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:06.197682  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:06.215691  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.215711  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:06.215756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:06.232575  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.232594  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:06.232606  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:06.232619  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:06.274143  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:06.274169  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.333535  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:06.333568  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:06.385263  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:06.385291  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:06.399965  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:06.399998  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:06.462490  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:02.365814  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.365888  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.381326  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:02.865848  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.865928  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.881269  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.365397  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.365478  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.380922  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.865482  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.865596  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.879430  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.366070  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.366183  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.381485  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.866086  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.866191  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.879535  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.366159  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.366268  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.379573  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.865791  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.865883  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.881058  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.365561  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.365642  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.379122  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.865926  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.879810  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.186274  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:06.187331  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.687316  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:05.705984  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:07.706819  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.962748  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:08.979756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:08.996761  170748 logs.go:276] 0 containers: []
	W0229 01:54:08.996786  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:08.996840  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:09.020061  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.020088  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:09.020144  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:09.042548  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.042578  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:09.042633  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:09.072428  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.072461  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:09.072525  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:09.089193  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.089216  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:09.089262  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:09.107143  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.107170  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:09.107220  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:09.125208  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.125228  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:09.125268  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:09.143488  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.143511  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:09.143522  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:09.143535  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:09.214360  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:09.214382  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:09.214395  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:09.256462  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:09.256492  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:09.312362  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:09.312392  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:09.362596  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:09.362630  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:07.365617  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.365729  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.379799  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:07.865347  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.865455  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.879417  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.366028  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.366123  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.380127  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.865702  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.865849  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.880014  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.365550  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.365632  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.382898  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.865431  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.865510  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.879281  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.365768  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.365864  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.380308  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.865941  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.879469  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.366107  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:11.366212  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:11.380134  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.380168  172338 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 01:54:11.380204  172338 kubeadm.go:1135] stopping kube-system containers ...
	I0229 01:54:11.380272  172338 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:11.400551  172338 docker.go:483] Stopping containers: [b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292]
	I0229 01:54:11.400620  172338 ssh_runner.go:195] Run: docker stop b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292
	I0229 01:54:11.420276  172338 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 01:54:11.442755  172338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:11.452745  172338 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:54:11.452816  172338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462724  172338 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462746  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:11.576479  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:10.687632  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:13.188979  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:09.707636  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:12.206349  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:14.206598  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:11.880988  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:11.894918  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:11.915749  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.915777  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:11.915837  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:11.933269  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.933295  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:11.933388  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:11.950460  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.950483  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:11.950530  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:11.966919  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.966943  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:11.967004  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:11.987487  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.987519  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:11.987602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:12.011234  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.011265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:12.011324  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:12.039057  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.039083  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:12.039140  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:12.062016  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.062047  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:12.062061  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:12.062078  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.116706  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:12.116744  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:12.176126  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:12.176156  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:12.234175  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:12.234210  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:12.249559  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:12.249597  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:12.321806  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:14.822521  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.837453  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:14.857687  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.857723  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:14.857804  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:14.879933  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.879966  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:14.880025  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:14.903296  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.903334  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:14.903477  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:14.924603  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.924635  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:14.924697  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:14.943135  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.943159  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:14.943218  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:14.961231  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.961265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:14.961326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:14.993744  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.993786  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:14.993857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:15.013656  170748 logs.go:276] 0 containers: []
	W0229 01:54:15.013686  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:15.013700  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:15.013714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:15.092540  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:15.092576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:15.162362  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:15.162406  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:15.178584  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:15.178612  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:15.256534  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:15.256560  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:15.256576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.722918  172338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146406214s)
	I0229 01:54:12.722946  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:12.927585  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:13.040907  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
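The restart path above regenerates the control plane piecewise with individual kubeadm init phases rather than a full kubeadm init. Condensed into a standalone sketch (same config path and phase order as the log; run as root on the node):

    #!/bin/bash
    CFG=/var/tmp/minikube/kubeadm.yaml
    # Recreate certs, kubeconfigs, kubelet bootstrap, static-pod manifests,
    # and the local etcd manifest, mirroring the phases invoked above.
    sudo kubeadm init phase certs all --config "$CFG"
    sudo kubeadm init phase kubeconfig all --config "$CFG"
    sudo kubeadm init phase kubelet-start --config "$CFG"
    sudo kubeadm init phase control-plane all --config "$CFG"
    sudo kubeadm init phase etcd local --config "$CFG"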
	I0229 01:54:13.139301  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:13.139384  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:13.640506  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.139790  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.640206  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.663070  172338 api_server.go:72] duration metric: took 1.523766735s to wait for apiserver process to appear ...
	I0229 01:54:14.663104  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:14.663126  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:14.663675  172338 api_server.go:269] stopped: https://192.168.50.38:8443/healthz: Get "https://192.168.50.38:8443/healthz": dial tcp 192.168.50.38:8443: connect: connection refused
	I0229 01:54:15.163277  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:15.190654  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.686359  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:16.207410  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:18.705701  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.942183  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.942214  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:17.942230  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:17.987284  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.987321  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:18.163519  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.168857  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:18.663488  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.668213  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.163425  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.171029  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.664211  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.668342  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:19.675820  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:19.675849  172338 api_server.go:131] duration metric: took 5.012736256s to wait for apiserver health ...
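
The 5s figure above is the product of a plain poll loop: api_server.go GETs /healthz roughly every 500ms (visible in the timestamps: 18.163, 18.663, 19.163, 19.664) and retries on anything but 200. The 403s are anonymous-user denials before the rbac/bootstrap-roles hook finishes; the 500s are poststarthooks still settling. A minimal sketch of that pattern, with the endpoint and timeout behavior taken from the log and everything else (names, backoff, TLS handling) illustrative rather than minikube's actual code:

package bootstrap

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires. Illustrative only; minikube's real
// loop lives in api_server.go.
func waitForHealthz(url string, timeout time.Duration) error {
	// During bootstrap the apiserver serves a self-signed cert, so the
	// anonymous health probe skips verification.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is just "ok"
			}
			// 403: RBAC roles not bootstrapped yet; 500: a
			// poststarthook is still failing. Either way, retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s never became healthy within %v", url, timeout)
}
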
	I0229 01:54:19.675858  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:19.675869  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:19.677686  172338 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 01:54:19.678985  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 01:54:19.690408  172338 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
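
The log records only that 457 bytes were pushed to /etc/cni/net.d/1-k8s.conflist, not what they were. For reference, a bridge CNI chain of the kind the "Configuring bridge CNI" step produces typically looks like the following. These are hypothetical contents expressed as a Go constant to match the driver code; the field values are representative, not read from this run:

package bootstrap

// bridgeConflist is an illustrative bridge+portmap chain; the actual
// 457-byte payload minikube wrote is not shown in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`
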
	I0229 01:54:19.711239  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:19.720671  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:19.720701  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:19.720709  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:19.720715  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:19.720723  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:19.720731  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 01:54:19.720736  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:19.720741  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:19.720761  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:19.720767  172338 system_pods.go:74] duration metric: took 9.509631ms to wait for pod list to return data ...
	I0229 01:54:19.720776  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:19.724321  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:19.724346  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:19.724358  172338 node_conditions.go:105] duration metric: took 3.577361ms to run NodePressure ...
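
Both waits above (system_pods.go and node_conditions.go) reduce to ordinary client-go list calls against the freshly healthy apiserver: one pod list in kube-system, one node list whose capacity fields back the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines. A sketch under the assumption of a standard clientset; minikube's helpers add retry and readiness bookkeeping on top:

package bootstrap

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitChecks mirrors the two verification steps: list kube-system pods,
// then report the per-node capacity used by the NodePressure check.
func waitChecks(cs kubernetes.Interface) error {
	ctx := context.TODO()

	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
	return nil
}
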
	I0229 01:54:19.724376  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:20.003533  172338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:54:20.017015  172338 ops.go:34] apiserver oom_adj: -16
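
The -16 oom_adj confirms the kernel is biased against OOM-killing the apiserver. The check is exactly the one-liner in the preceding Run line, executed on the guest over SSH; reduced to a local sketch (helper name illustrative):

package bootstrap

import (
	"os/exec"
	"strings"
)

// apiserverOOMAdj reproduces `cat /proc/$(pgrep kube-apiserver)/oom_adj`
// from the log. Sketch only; minikube runs this through its ssh_runner.
func apiserverOOMAdj() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		`cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
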
	I0229 01:54:20.017041  172338 kubeadm.go:640] restartCluster took 18.676407847s
	I0229 01:54:20.017053  172338 kubeadm.go:406] StartCluster complete in 18.721880164s
	I0229 01:54:20.017075  172338 settings.go:142] acquiring lock: {Name:mk324b2a181b324166fa2d8da3ad5d1101ca0339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.017158  172338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:20.018872  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.019139  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:54:20.019351  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019320  172338 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 01:54:20.019413  172338 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-133807"
	I0229 01:54:20.019429  172338 addons.go:69] Setting default-storageclass=true in profile "newest-cni-133807"
	I0229 01:54:20.019437  172338 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-133807"
	W0229 01:54:20.019445  172338 addons.go:243] addon storage-provisioner should already be in state true
	I0229 01:54:20.019445  172338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133807"
	I0229 01:54:20.019429  172338 cache.go:107] acquiring lock: {Name:mkf83f87b4b5efd9201d385629e40dc6af5715f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:54:20.019496  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019509  172338 cache.go:115] /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 01:54:20.019520  172338 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 106.029µs
	I0229 01:54:20.019530  172338 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 01:54:20.019528  172338 addons.go:69] Setting metrics-server=true in profile "newest-cni-133807"
	I0229 01:54:20.019539  172338 cache.go:87] Successfully saved all images to host disk.
	I0229 01:54:20.019551  172338 addons.go:234] Setting addon metrics-server=true in "newest-cni-133807"
	W0229 01:54:20.019561  172338 addons.go:243] addon metrics-server should already be in state true
	I0229 01:54:20.019604  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019735  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.019930  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020002  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020042  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020045  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020109  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020138  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020260  172338 addons.go:69] Setting dashboard=true in profile "newest-cni-133807"
	I0229 01:54:20.020302  172338 addons.go:234] Setting addon dashboard=true in "newest-cni-133807"
	W0229 01:54:20.020310  172338 addons.go:243] addon dashboard should already be in state true
	I0229 01:54:20.020476  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.020937  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.021009  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.029773  172338 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-133807" context rescaled to 1 replicas
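
Rescaling the coredns deployment to one replica, as the kapi.go line reports, is a standard scale-subresource update. A minimal client-go sketch (assumed helper name; minikube's kapi.go also waits for the rollout to settle):

package bootstrap

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS sets the replica count on the coredns deployment via
// the scale subresource, as in the "rescaled to 1 replicas" step.
func rescaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments("kube-system").
		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
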
	I0229 01:54:20.029823  172338 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:54:20.031663  172338 out.go:177] * Verifying Kubernetes components...
	I0229 01:54:20.033048  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:20.041914  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0229 01:54:20.041918  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0229 01:54:20.041966  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0229 01:54:20.041928  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0229 01:54:20.042220  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0229 01:54:20.042451  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042454  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042924  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043005  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043019  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043030  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043044  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043051  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043098  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043401  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043418  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043428  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.043543  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043555  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043558  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043567  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.044095  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044134  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.044332  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044374  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044404  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044425  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044925  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044970  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.045173  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045588  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.045633  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.047760  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.047785  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.049100  172338 addons.go:234] Setting addon default-storageclass=true in "newest-cni-133807"
	W0229 01:54:20.049123  172338 addons.go:243] addon default-storageclass should already be in state true
	I0229 01:54:20.049152  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.049548  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.049584  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.064541  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0229 01:54:20.065017  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065158  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I0229 01:54:20.065470  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065736  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065747  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.065986  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065997  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.066225  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066313  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066403  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.066481  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.068564  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0229 01:54:20.068997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.069067  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.069072  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.071190  172338 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 01:54:20.069506  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.072655  172338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:54:20.072680  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.074227  172338 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.074244  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:54:20.074265  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.072649  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 01:54:20.074288  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 01:54:20.074310  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.074704  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.074919  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.075229  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:20.075252  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.078346  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079734  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.079764  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080050  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.080073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080531  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080805  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.080854  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.081053  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081112  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081357  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.081683  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.081913  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0229 01:54:20.082210  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.082371  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082386  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0229 01:54:20.082793  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082934  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.082954  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.083003  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083017  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083155  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.083315  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083325  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.083400  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083505  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.083661  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083828  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.083874  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.083905  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.084097  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.085520  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.087522  172338 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 01:54:20.088944  172338 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 01:54:17.803447  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:17.818754  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:17.838257  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.838289  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:17.838351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:17.859095  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.859128  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:17.859188  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:17.880186  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.880219  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:17.880281  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:17.905367  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.905415  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:17.905476  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:17.926888  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.926913  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:17.926974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:17.948858  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.948884  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:17.948941  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:17.967835  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.967871  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:17.967930  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:17.999903  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.999935  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:17.999949  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:17.999963  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:18.066021  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:18.066065  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:18.091596  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:18.091621  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:18.167407  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:18.167429  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:18.167444  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:18.212978  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:18.213013  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:20.785493  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.802351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:20.825685  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.825720  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:20.825770  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:20.849013  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.849043  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:20.849111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:20.871166  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.871198  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:20.871249  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:20.889932  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.889963  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:20.890022  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:20.912390  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.912416  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:20.912492  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:20.931206  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.931233  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:20.931291  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:20.949663  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.949687  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:20.949739  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:20.967249  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.967277  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:20.967288  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:20.967299  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:21.062400  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:21.062428  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:21.062445  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:21.113883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:21.113924  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:21.180620  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:21.180659  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:21.236555  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:21.236589  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
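
The repeated "0 containers" results in process 170748's log-gathering loop come from docker ps lookups keyed on the k8s_<component> name prefix that cri-dockerd gives kube-system containers; the v1.16.0 cluster there has not managed to start any. Each lookup is the Run line shown verbatim above; as a local sketch (helper name illustrative):

package bootstrap

import (
	"os/exec"
	"strings"
)

// containerIDs mirrors logs.go: list all containers, running or exited,
// whose name matches k8s_<component>, printing only their IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // empty slice == "0 containers"
}
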
	I0229 01:54:20.090259  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 01:54:20.090273  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 01:54:20.090286  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.092728  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093153  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.093186  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093317  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.093479  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.093618  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.093732  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.118803  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0229 01:54:20.119213  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.119796  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.119825  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.120194  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.120440  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.121995  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.122309  172338 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.122327  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:54:20.122352  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.124725  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125104  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.125126  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.125513  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.125629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.125721  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.333837  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 01:54:20.333867  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 01:54:20.365581  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 01:54:20.365605  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 01:54:20.387559  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 01:54:20.387585  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 01:54:20.391190  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.394118  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.442370  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 01:54:20.442407  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 01:54:20.466973  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 01:54:20.467005  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
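
Each "scp memory --> path (n bytes)" line is minikube streaming an embedded manifest straight from memory to the guest over SSH, with no temp file on either side. A sketch of the idea using golang.org/x/crypto/ssh; it assumes an already-dialed client, and minikube's sshutil additionally handles permissions and retries:

package bootstrap

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// pushBytes streams an in-memory manifest to a path on the guest,
// mirroring the "scp memory --> <dest> (<n> bytes)" log lines.
func pushBytes(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// sudo tee writes stdin to the destination; stdout is discarded.
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
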
	I0229 01:54:20.489843  172338 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 01:54:20.489843  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:20.489919  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:20.489940  172338 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:54:20.489947  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.489953  172338 cache_images.go:262] succeeded pushing to: newest-cni-133807
	I0229 01:54:20.489960  172338 cache_images.go:263] failed pushing to: 
	I0229 01:54:20.489991  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490005  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490309  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490327  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.490335  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490342  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490620  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:20.490605  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490643  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.507250  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.507271  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 01:54:20.529738  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.572814  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 01:54:20.572836  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 01:54:20.614903  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 01:54:20.614929  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 01:54:20.698112  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 01:54:20.698133  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 01:54:20.767402  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 01:54:20.767429  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 01:54:20.833849  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 01:54:20.833880  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 01:54:20.894077  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 01:54:20.894100  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 01:54:20.947725  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
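
Addon application always shells out to the version-pinned kubectl on the guest with KUBECONFIG pointed at the cluster's own config, exactly as the Run line above shows for the ten dashboard manifests. Reduced to a local sketch (binary and kubeconfig paths from the log; the helper itself is illustrative):

package bootstrap

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests invokes the pinned kubectl with one -f flag per
// manifest, as in the dashboard/metrics-server apply commands above.
func applyAddonManifests(files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}
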
	I0229 01:54:21.834822  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.440658264s)
	I0229 01:54:21.834862  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.443647567s)
	I0229 01:54:21.834881  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.834882  172338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.344911071s)
	I0229 01:54:21.834935  172338 api_server.go:72] duration metric: took 1.805074704s to wait for apiserver process to appear ...
	I0229 01:54:21.834954  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:21.834975  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:21.834886  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.834904  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835393  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835415  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835425  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835429  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835443  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835456  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835468  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835479  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835433  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835847  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835856  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835859  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835862  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835868  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835874  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.843384  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:21.844033  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.844056  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.844319  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.844354  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.844370  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.844766  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:21.844804  172338 api_server.go:131] duration metric: took 9.827817ms to wait for apiserver health ...
	I0229 01:54:21.844815  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:21.851946  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:21.851980  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:21.851990  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:21.852004  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:21.852013  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:21.852024  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running
	I0229 01:54:21.852032  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:21.852042  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:21.852052  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:21.852063  172338 system_pods.go:74] duration metric: took 7.238252ms to wait for pod list to return data ...
	I0229 01:54:21.852075  172338 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:54:21.855974  172338 default_sa.go:45] found service account: "default"
	I0229 01:54:21.856003  172338 default_sa.go:55] duration metric: took 3.916391ms for default service account to be created ...
	I0229 01:54:21.856020  172338 kubeadm.go:581] duration metric: took 1.826163486s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 01:54:21.856046  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:21.858351  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:21.858367  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:21.858377  172338 node_conditions.go:105] duration metric: took 2.326102ms to run NodePressure ...
	I0229 01:54:21.858387  172338 start.go:228] waiting for startup goroutines ...
	I0229 01:54:21.896983  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.367194081s)
	I0229 01:54:21.897048  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897070  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897356  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897372  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897386  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897669  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897686  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897701  172338 addons.go:470] Verifying addon metrics-server=true in "newest-cni-133807"
	I0229 01:54:22.315002  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.367214151s)
	I0229 01:54:22.315099  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315119  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315448  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315472  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315488  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.315512  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315524  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315830  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315843  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.317416  172338 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-133807 addons enable metrics-server
	
	I0229 01:54:22.318943  172338 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 01:54:22.320494  172338 addons.go:505] enable addons completed in 2.301194216s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 01:54:22.320539  172338 start.go:233] waiting for cluster config update ...
	I0229 01:54:22.320554  172338 start.go:242] writing updated cluster config ...
	I0229 01:54:22.320879  172338 ssh_runner.go:195] Run: rm -f paused
	I0229 01:54:22.378739  172338 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:54:22.380459  172338 out.go:177] * Done! kubectl is now configured to use "newest-cni-133807" cluster and "default" namespace by default
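For context on the "minor skew: 0" note just above: the final startup check compares the kubectl client's minor version against the cluster's and reports how far apart they are. A minimal self-contained Go sketch of that comparison, using the two versions from the log; minorOf is an illustrative helper, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorOf pulls the minor component out of a "major.minor.patch[-pre]" version string.
    func minorOf(v string) (int, error) {
    	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
    	if len(parts) < 2 {
    		return 0, fmt.Errorf("unparseable version %q", v)
    	}
    	return strconv.Atoi(parts[1])
    }

    func main() {
    	client, cluster := "1.29.2", "1.29.0-rc.2" // versions reported in the log above
    	cm, err1 := minorOf(client)
    	sm, err2 := minorOf(cluster)
    	if err1 != nil || err2 != nil {
    		fmt.Println("version parse failed:", err1, err2)
    		return
    	}
    	skew := cm - sm
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }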
	I0229 01:54:19.687767  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:21.689355  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:20.707480  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:22.707979  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:23.754280  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:23.768586  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:23.793150  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.793172  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:23.793221  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:23.818865  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.818896  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:23.818949  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:23.838078  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.838105  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:23.838161  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:23.859213  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.859235  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:23.859279  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:23.878876  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.878901  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:23.878938  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:23.899317  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.899340  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:23.899387  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:23.916826  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.916851  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:23.916891  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:23.933713  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.933739  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:23.933752  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:23.933766  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:24.003099  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:24.003136  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:24.021001  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:24.021038  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:24.097013  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:24.097035  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:24.097050  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:24.145682  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:24.145714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
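Each log-gathering round above first locates every control-plane container by name filter before pulling its logs, which is why the same eight "docker ps -a --filter=name=k8s_..." calls repeat each cycle. A standalone Go sketch of that discovery step, mirroring the exact invocation and output format in the log (containerIDs is an illustrative name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists IDs of containers named k8s_<component>, exactly as the
    // repeated "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }

An empty result for every component, as in the rounds above, is what drives the fallback to raw kubelet/dmesg/Docker logs.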
	I0229 01:54:26.710373  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:26.724077  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:26.740532  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.740556  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:26.740603  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:24.187991  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.188081  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:28.688297  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:24.708094  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:27.205437  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:29.206577  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.758229  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.758251  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:26.758294  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:26.774881  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.774904  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:26.774971  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:26.790893  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.790913  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:26.790953  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:26.807273  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.807300  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:26.807359  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:26.824081  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.824107  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:26.824165  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:26.840770  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.840793  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:26.840851  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:26.856932  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.856966  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:26.856980  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:26.856995  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:26.907299  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:26.907331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:26.922552  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:26.922585  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:26.999079  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:26.999109  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:26.999125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:27.051061  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:27.051098  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:29.607727  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:29.622929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:29.641829  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.641861  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:29.641932  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:29.658732  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.658761  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:29.658825  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:29.676597  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.676619  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:29.676663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:29.695001  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.695030  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:29.695089  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:29.711947  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.711982  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:29.712038  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:29.728832  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.728860  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:29.728925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:29.744888  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.744907  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:29.744951  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:29.761144  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.761169  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:29.761182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:29.761192  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:29.810791  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:29.810823  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:29.824497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:29.824527  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:29.890825  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:29.890849  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:29.890865  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:29.934980  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:29.935023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:31.187022  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.686489  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:31.210173  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.705583  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:32.508161  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:32.523715  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:32.541751  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.541796  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:32.541860  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:32.559746  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.559772  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:32.559826  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:32.578867  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.578890  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:32.578942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:32.596025  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.596050  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:32.596104  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:32.613250  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.613277  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:32.613326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:32.629760  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.629808  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:32.629867  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:32.646940  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.646962  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:32.647034  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:32.666140  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.666167  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:32.666180  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:32.666194  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:32.718171  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:32.718206  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:32.732695  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:32.732720  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:32.796621  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:32.796642  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:32.796657  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:32.839872  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:32.839908  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:35.396632  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:35.412053  170748 kubeadm.go:640] restartCluster took 4m11.905401704s
	W0229 01:54:35.412153  170748 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 01:54:35.412183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:54:35.838651  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:35.854409  170748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:35.865129  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:35.875642  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:54:35.875696  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:54:36.022349  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:54:36.059938  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:54:36.131386  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
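At this point restartCluster has given up ("apiserver healthz: apiserver process never appeared"), so the flow falls back to wiping and re-initializing the control plane, which is what the kubeadm reset and kubeadm init commands above record. A hedged Go sketch of that two-step sequence; run is an illustrative stand-in for the ssh_runner calls, and the preflight-error list is abbreviated here (the full set appears in the init command above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command locally and echoes its combined output, roughly
    // what ssh_runner does over SSH in the log above.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("$ %s %v\n%s", name, args, out)
    	return err
    }

    func main() {
    	// 1. Tear down the failed control plane.
    	_ = run("sudo", "kubeadm", "reset",
    		"--cri-socket", "/var/run/dockershim.sock", "--force")
    	// 2. Re-initialize from the staged config, tolerating leftover state.
    	//    (--ignore-preflight-errors abbreviated; see the full list logged above.)
    	if err := run("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=Port-10250,Swap,NumCPU"); err != nil {
    		fmt.Println("kubeadm init failed:", err)
    	}
    }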
	I0229 01:54:36.188327  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.686993  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:36.207432  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.706396  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.687792  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.188499  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.708268  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.206459  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.686549  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.689009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.705669  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.705839  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.187643  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.193029  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.205484  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.205628  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.205895  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.686931  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:57.185865  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:56.206104  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:58.707011  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:59.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:01.188066  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.687015  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:00.709471  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.205172  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:06.187463  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:08.686768  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:05.206413  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:07.706024  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:11.187247  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:13.686761  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:10.205156  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:12.205766  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:15.688395  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:18.186256  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:14.705829  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:17.206857  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:20.186585  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:22.186702  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:19.704997  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:21.706261  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:23.707958  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:24.187221  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.187591  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.687260  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.206739  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.705765  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.687620  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:32.688592  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.706982  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:33.208209  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.692999  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:37.189729  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.705863  169202 pod_ready.go:81] duration metric: took 4m0.00680066s waiting for pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace to be "Ready" ...
	E0229 01:55:34.705886  169202 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:55:34.705893  169202 pod_ready.go:38] duration metric: took 4m1.59715045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
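The 4m0s duration and "context deadline exceeded" above are the signature of a context-bounded poll: the pod's Ready condition is re-checked every couple of seconds (matching the roughly 2s spacing of the pod_ready lines) until the deadline fires. A self-contained sketch of that pattern; the interval, timeout, and function name are inferred from the log, not taken from minikube's source:

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // waitPodReady re-runs check every interval until it reports true or the
    // context's deadline expires.
    func waitPodReady(ctx context.Context, interval time.Duration, check func() bool) error {
    	t := time.NewTicker(interval)
    	defer t.Stop()
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // "context deadline exceeded", as in the log
    		case <-t.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	// The check is stubbed to stay false, reproducing a pod that never
    	// turns Ready; a real check would inspect the pod's status conditions.
    	err := waitPodReady(ctx, 2*time.Second, func() bool { return false })
    	if errors.Is(err, context.DeadlineExceeded) {
    		fmt.Println("WaitExtra: waitPodCondition:", err)
    	}
    }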
	I0229 01:55:34.705912  169202 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:55:34.705982  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:34.727306  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:34.727390  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:34.745657  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:34.745730  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:34.763604  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:34.763681  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:34.784535  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:34.784611  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:34.802288  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:34.802358  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:34.821502  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:34.821576  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:34.838522  169202 logs.go:276] 0 containers: []
	W0229 01:55:34.838548  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:34.838600  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:34.855799  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:34.855896  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:34.872982  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:34.873012  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:34.873023  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:34.895617  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:34.895647  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:34.915617  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:34.915645  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:34.989082  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:34.989112  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:35.017467  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:35.017495  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:35.046564  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:35.046591  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:35.068469  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:35.068499  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:35.098606  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:35.098636  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:35.125553  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:35.125589  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:35.171952  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:35.171993  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:35.233201  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:35.233241  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:35.291798  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.292005  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298118  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298323  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.321468  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:35.321511  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:35.338552  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:35.338582  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:35.453569  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453597  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:35.453663  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:35.453677  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453687  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453703  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453716  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.453727  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453740  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
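The "Found kubelet problem" entries and the "X Problems detected in kubelet" summary above come from scanning the captured journalctl output for known failure signatures and replaying the matching lines. A rough Go sketch of that idea; the two substring triggers below are guesses that happen to match the reflector list/watch errors flagged in this run, not minikube's real pattern table:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same capture command as the "Gathering logs for kubelet" steps above.
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo journalctl -u kubelet -n 400").Output()
    	if err != nil {
    		fmt.Println("journalctl:", err)
    		return
    	}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(string(out)))
    	for sc.Scan() {
    		line := sc.Text()
    		// Illustrative triggers only; this run's flagged lines were
    		// reflector failures listing/watching ConfigMaps.
    		if strings.Contains(line, "Failed to watch") || strings.Contains(line, "failed to list") {
    			problems = append(problems, line)
    		}
    	}
    	if len(problems) > 0 {
    		fmt.Println("X Problems detected in kubelet:")
    		for _, p := range problems {
    			fmt.Println(" ", p)
    		}
    	}
    }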
	I0229 01:55:39.687296  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:42.187476  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:44.189760  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:46.686245  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:48.687170  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:45.455294  169202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:55:45.470848  169202 api_server.go:72] duration metric: took 4m14.039378333s to wait for apiserver process to appear ...
	I0229 01:55:45.470876  169202 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:55:45.470953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:45.489614  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:45.489694  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:45.507881  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:45.507953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:45.540532  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:45.540609  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:45.560035  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:45.560134  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:45.579280  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:45.579376  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:45.597768  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:45.597865  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:45.618789  169202 logs.go:276] 0 containers: []
	W0229 01:55:45.618814  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:45.618860  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:45.638075  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:45.638159  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:45.656571  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:45.656611  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:45.656627  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:45.686218  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:45.686254  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:45.709338  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:45.709370  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:45.755652  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:45.755689  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:45.822848  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:45.822883  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:45.879421  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.879584  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885205  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885368  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:45.906780  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:45.906805  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:45.922651  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:45.922688  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:45.956685  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:45.956715  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:45.980079  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:45.980108  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:46.000800  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:46.000828  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:46.059443  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:46.059478  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:46.157674  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:46.157708  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:46.179678  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:46.179710  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:46.225916  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.225953  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:46.226025  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:46.226043  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226051  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226062  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226068  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:46.226077  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.226084  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:55:51.187510  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:53.686827  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.187244  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:58.686099  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.228095  169202 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0229 01:55:56.232840  169202 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0229 01:55:56.233957  169202 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:55:56.233979  169202 api_server.go:131] duration metric: took 10.763095955s to wait for apiserver health ...
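The healthz probe logged just above is a plain HTTPS GET against the apiserver's secured port, treating a 200 response with body "ok" as healthy. A minimal sketch of one such probe, using the address from the log; TLS verification is skipped here purely for illustration, since the endpoint presents a cluster-local, self-signed CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// Endpoint taken from the "Checking apiserver healthz at ..." line above.
    	resp, err := client.Get("https://192.168.39.152:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body) // expect 200 and "ok"
    }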
	I0229 01:55:56.233988  169202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:55:56.234055  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:56.257140  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:56.257221  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:56.286172  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:56.286263  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:56.305014  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:56.305084  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:56.326712  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:56.326787  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:56.347079  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:56.347145  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:56.367625  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:56.367692  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:56.385387  169202 logs.go:276] 0 containers: []
	W0229 01:55:56.385431  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:56.385480  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:56.403032  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:56.403097  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:56.422016  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:56.422055  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:56.422072  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:56.444017  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:56.444045  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:56.473118  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:56.473151  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:56.518781  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:56.518819  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:56.542772  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:56.542814  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:56.604186  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.604348  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611644  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611847  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:56.635056  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:56.635088  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:56.649472  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:56.649496  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:56.763663  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:56.763696  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:56.793607  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:56.793638  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:56.857562  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:56.857597  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:56.924313  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:56.924343  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:56.962407  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:56.962436  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:56.985427  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:56.985458  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:57.007649  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007675  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:57.007729  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:57.007740  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007748  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007760  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007769  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:57.007777  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007785  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
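Note: the "no relationship found between node ... and this object" denials flagged above come from the apiserver's Node authorizer, which only lets a kubelet read ConfigMaps referenced by pods bound to its node. A hedged way to reproduce the denial by hand (assumes kubectl points at this cluster and the caller is allowed to impersonate; the identity mirrors the kubelet's):

    # Impersonate the node's kubelet identity and ask whether the list is allowed.
    # Expect "no" until a pod on this node actually references the ConfigMap.
    kubectl auth can-i list configmaps \
        --namespace kubernetes-dashboard \
        --as system:node:no-preload-449532 \
        --as-group system:nodes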
	I0229 01:56:00.687363  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:03.187734  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.019205  169202 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:07.019240  169202 system_pods.go:61] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.019246  169202 system_pods.go:61] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.019252  169202 system_pods.go:61] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.019257  169202 system_pods.go:61] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.019262  169202 system_pods.go:61] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.019266  169202 system_pods.go:61] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.019275  169202 system_pods.go:61] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.019281  169202 system_pods.go:61] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.019292  169202 system_pods.go:74] duration metric: took 10.78529776s to wait for pod list to return data ...
	I0229 01:56:07.019300  169202 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:07.021795  169202 default_sa.go:45] found service account: "default"
	I0229 01:56:07.021822  169202 default_sa.go:55] duration metric: took 2.513891ms for default service account to be created ...
	I0229 01:56:07.021833  169202 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:07.027968  169202 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:07.027991  169202 system_pods.go:89] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.027999  169202 system_pods.go:89] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.028006  169202 system_pods.go:89] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.028012  169202 system_pods.go:89] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.028021  169202 system_pods.go:89] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.028028  169202 system_pods.go:89] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.028044  169202 system_pods.go:89] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.028053  169202 system_pods.go:89] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.028065  169202 system_pods.go:126] duration metric: took 6.224923ms to wait for k8s-apps to be running ...
	I0229 01:56:07.028076  169202 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:07.028144  169202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:07.043579  169202 system_svc.go:56] duration metric: took 15.495808ms WaitForService to wait for kubelet.
	I0229 01:56:07.043608  169202 kubeadm.go:581] duration metric: took 4m35.612143208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:56:07.043638  169202 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:07.046428  169202 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:07.046447  169202 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:07.046457  169202 node_conditions.go:105] duration metric: took 2.814262ms to run NodePressure ...
	I0229 01:56:07.046469  169202 start.go:228] waiting for startup goroutines ...
	I0229 01:56:07.046475  169202 start.go:233] waiting for cluster config update ...
	I0229 01:56:07.046485  169202 start.go:242] writing updated cluster config ...
	I0229 01:56:07.046741  169202 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:07.095609  169202 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:56:07.097736  169202 out.go:177] * Done! kubectl is now configured to use "no-preload-449532" cluster and "default" namespace by default
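Note: with start complete, minikube has written a kubectl context named after the profile; a quick smoke check against the new cluster might look like:

    # Contexts are named after the minikube profile:
    kubectl --context no-preload-449532 get pods -A    # all pods, all namespaces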
	I0229 01:56:05.188374  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.188627  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:09.688264  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:12.188346  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:14.686751  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:16.687139  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:18.187973  169852 pod_ready.go:81] duration metric: took 4m0.008139239s waiting for pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace to be "Ready" ...
	E0229 01:56:18.187998  169852 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:56:18.188006  169852 pod_ready.go:38] duration metric: took 4m0.805438302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:56:18.188024  169852 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:56:18.188086  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:18.208854  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:18.208946  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:18.227659  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:18.227750  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:18.246475  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:18.246552  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:18.268583  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:18.268661  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:18.287872  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:18.287962  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:18.306446  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:18.306527  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:18.325914  169852 logs.go:276] 0 containers: []
	W0229 01:56:18.325943  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:18.325996  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:18.345838  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:18.345948  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:18.365691  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:18.365744  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:18.365763  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:18.390529  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:18.390558  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:18.441681  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:18.441715  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:18.521769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:18.522020  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:18.546113  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:18.546149  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:18.564900  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:18.564934  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:18.713864  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:18.713900  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:18.751902  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:18.752004  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:18.798480  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:18.798507  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:18.845423  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:18.845452  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:18.873120  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:18.873144  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:18.898180  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:18.898209  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:18.920066  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:18.920097  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:18.991663  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:18.991695  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:19.060048  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060079  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:19.060145  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:19.060170  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:19.060184  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:19.060198  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060209  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
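Note: the "container status" step in these gathering passes is runtime-agnostic: it prefers crictl and falls back to docker. The same one-liner from the commands above, annotated:

    # `which crictl || echo crictl` keeps the argument non-empty, so sudo always
    # receives a command; if crictl is absent the first ps fails and docker takes over.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a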
	I0229 01:56:32.235880  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:56:32.236029  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:56:32.238423  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:56:32.238502  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:56:32.238599  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:56:32.238744  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:56:32.238904  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:56:32.239073  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:56:32.239200  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:56:32.239271  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:56:32.239350  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:56:32.241126  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:56:32.241192  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:56:32.241251  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:56:32.241317  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:56:32.241394  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:56:32.241469  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:56:32.241523  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:56:32.241605  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:56:32.241700  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:56:32.241811  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:56:32.241921  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:56:32.242001  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:56:32.242081  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:56:32.242164  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:56:32.242247  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:56:32.242344  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:56:32.242427  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:56:32.242484  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:56:29.061463  169852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:56:29.077717  169852 api_server.go:72] duration metric: took 4m14.467720845s to wait for apiserver process to appear ...
	I0229 01:56:29.077739  169852 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:56:29.077840  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:29.096876  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:29.096961  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:29.114345  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:29.114423  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:29.131634  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:29.131705  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:29.149068  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:29.149139  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:29.166411  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:29.166483  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:29.182906  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:29.182982  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:29.199536  169852 logs.go:276] 0 containers: []
	W0229 01:56:29.199556  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:29.199599  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:29.218889  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:29.218951  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:29.237207  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:29.237245  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:29.237258  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:29.253233  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:29.253267  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:29.379843  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:29.379871  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:29.411795  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:29.411822  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:29.438557  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:29.438583  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:29.459479  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:29.459505  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:29.507590  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:29.507620  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:29.573263  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.573453  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.595549  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:29.595574  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:29.637026  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:29.637058  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:29.658572  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:29.658603  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:29.683814  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:29.683844  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:29.705482  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:29.705511  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:29.768497  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:29.768531  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:29.836247  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836270  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:29.836320  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:29.836331  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.836339  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.836350  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836360  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:32.244633  170748 out.go:204]   - Booting up control plane ...
	I0229 01:56:32.244727  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:56:32.244807  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:56:32.244884  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:56:32.244992  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:56:32.245189  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:56:32.245267  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:56:32.245360  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245532  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245599  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245746  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245826  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245998  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246093  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246273  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246359  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246574  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246588  170748 kubeadm.go:322] 
	I0229 01:56:32.246630  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:56:32.246679  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:56:32.246693  170748 kubeadm.go:322] 
	I0229 01:56:32.246740  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:56:32.246791  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:56:32.246892  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:56:32.246905  170748 kubeadm.go:322] 
	I0229 01:56:32.247026  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:56:32.247072  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:56:32.247116  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:56:32.247124  170748 kubeadm.go:322] 
	I0229 01:56:32.247212  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:56:32.247289  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:56:32.247361  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:56:32.247406  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:56:32.247488  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:56:32.247541  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 01:56:32.247677  170748 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
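Note: the triage steps kubeadm prints on a wait-control-plane timeout, collected into one runnable sequence (CONTAINERID is a placeholder, exactly as in the message above):

    systemctl status kubelet                    # is the kubelet service active?
    journalctl -xeu kubelet                     # recent kubelet log with explanations
    docker ps -a | grep kube | grep -v pause    # find crashed control-plane containers
    docker logs CONTAINERID                     # inspect the failing container's output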
	
	I0229 01:56:32.247732  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:56:32.689675  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:32.704123  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:56:32.713829  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
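Note: the stale-config probe that just failed is a plain ls over the four kubeconfig files; exit status 2 here simply means none of them exist yet, so there is nothing to clean before the retry:

    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    # exit 0 -> configs present (candidates for cleanup); exit 2 -> nothing to clean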
	I0229 01:56:32.713881  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:56:32.847290  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:56:32.879658  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:56:32.959513  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
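Note: two of the preflight warnings above have well-known remedies: switch Docker to the systemd cgroup driver and enable the kubelet unit. A minimal sketch, assuming /etc/docker/daemon.json has no other options set:

    # /etc/docker/daemon.json:  { "exec-opts": ["native.cgroupdriver=systemd"] }
    sudo systemctl restart docker             # pick up the new cgroup driver
    sudo systemctl enable kubelet.service     # clears the Service-Kubelet warning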
	I0229 01:56:39.838133  169852 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8444/healthz ...
	I0229 01:56:39.843637  169852 api_server.go:279] https://192.168.72.56:8444/healthz returned 200:
	ok
	I0229 01:56:39.844896  169852 api_server.go:141] control plane version: v1.28.4
	I0229 01:56:39.844921  169852 api_server.go:131] duration metric: took 10.767174552s to wait for apiserver health ...
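Note: the healthz poll that just succeeded can be reproduced against the same endpoint; -k (or --cacert with the profile's CA) is needed because the apiserver serves a cluster-internal certificate:

    curl -k https://192.168.72.56:8444/healthz    # expect the body: ok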
	I0229 01:56:39.844930  169852 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:56:39.845005  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:39.867188  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:39.867264  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:39.890265  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:39.890345  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:39.911540  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:39.911617  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:39.939266  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:39.939340  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:39.957270  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:39.957337  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:39.974956  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:39.975025  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:39.991794  169852 logs.go:276] 0 containers: []
	W0229 01:56:39.991815  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:39.991856  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:40.009143  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:40.009208  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:40.026359  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:40.026392  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:40.026406  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:40.046944  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:40.046969  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:40.067580  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:40.067604  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:40.091791  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:40.091812  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:40.151587  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:40.151619  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:40.221769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.221978  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.247432  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:40.247466  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:40.283196  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:40.283227  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:40.305677  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:40.305703  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:40.333975  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:40.334003  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:40.402520  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:40.402558  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:40.418892  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:40.418926  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:40.537554  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:40.537597  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:40.576026  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:40.576067  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:40.622017  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622055  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:40.622123  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:40.622137  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.622147  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.622165  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622178  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:50.632890  169852 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:50.632919  169852 system_pods.go:61] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.632924  169852 system_pods.go:61] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.632929  169852 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.632933  169852 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.632936  169852 system_pods.go:61] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.632939  169852 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.632944  169852 system_pods.go:61] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.632948  169852 system_pods.go:61] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.632955  169852 system_pods.go:74] duration metric: took 10.788019346s to wait for pod list to return data ...
	I0229 01:56:50.632961  169852 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:50.636262  169852 default_sa.go:45] found service account: "default"
	I0229 01:56:50.636279  169852 default_sa.go:55] duration metric: took 3.313291ms for default service account to be created ...
	I0229 01:56:50.636292  169852 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:50.641677  169852 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:50.641698  169852 system_pods.go:89] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.641704  169852 system_pods.go:89] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.641710  169852 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.641714  169852 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.641718  169852 system_pods.go:89] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.641722  169852 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.641730  169852 system_pods.go:89] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.641736  169852 system_pods.go:89] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.641743  169852 system_pods.go:126] duration metric: took 5.445558ms to wait for k8s-apps to be running ...
	I0229 01:56:50.641749  169852 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:50.641806  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:50.660446  169852 system_svc.go:56] duration metric: took 18.690637ms WaitForService to wait for kubelet.
	I0229 01:56:50.660469  169852 kubeadm.go:581] duration metric: took 4m36.05047851s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:56:50.660486  169852 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:50.663507  169852 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:50.663526  169852 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:50.663537  169852 node_conditions.go:105] duration metric: took 3.04635ms to run NodePressure ...
	I0229 01:56:50.663547  169852 start.go:228] waiting for startup goroutines ...
	I0229 01:56:50.663552  169852 start.go:233] waiting for cluster config update ...
	I0229 01:56:50.663561  169852 start.go:242] writing updated cluster config ...
	I0229 01:56:50.663826  169852 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:50.710751  169852 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:56:50.712950  169852 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-308557" cluster and "default" namespace by default
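Note: start ends by reporting client/server version skew (minor skew: 1 is within kubectl's supported one-minor-version window); the same comparison by hand:

    kubectl --context default-k8s-diff-port-308557 version    # compare client and server minor versions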
	I0229 01:58:29.528786  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:58:29.528884  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:58:29.530491  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:58:29.530596  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:58:29.530680  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:58:29.530764  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:58:29.530861  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:58:29.530964  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:58:29.531068  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:58:29.531119  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:58:29.531176  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:58:29.532944  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:58:29.533047  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:58:29.533144  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:58:29.533247  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:58:29.533305  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:58:29.533379  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:58:29.533441  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:58:29.533506  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:58:29.533567  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:58:29.533636  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:58:29.533700  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:58:29.533744  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:58:29.533806  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:58:29.533878  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:58:29.533967  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:58:29.534067  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:58:29.534153  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:58:29.534217  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:58:29.535694  170748 out.go:204]   - Booting up control plane ...
	I0229 01:58:29.535778  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:58:29.535844  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:58:29.535904  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:58:29.535972  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:58:29.536127  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:58:29.536212  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:58:29.536285  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536458  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536538  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536729  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536791  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536941  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537007  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537189  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537267  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537495  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537513  170748 kubeadm.go:322] 
	I0229 01:58:29.537569  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:58:29.537626  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:58:29.537636  170748 kubeadm.go:322] 
	I0229 01:58:29.537685  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:58:29.537744  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:58:29.537903  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:58:29.537915  170748 kubeadm.go:322] 
	I0229 01:58:29.538065  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:58:29.538113  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:58:29.538174  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:58:29.538183  170748 kubeadm.go:322] 
	I0229 01:58:29.538325  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:58:29.538450  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:58:29.538581  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:58:29.538656  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:58:29.538743  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:58:29.538829  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:58:29.538866  170748 kubeadm.go:406] StartCluster complete in 8m6.061536028s
	I0229 01:58:29.538947  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:58:29.556117  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.556141  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:58:29.556205  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:58:29.572791  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.572812  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:58:29.572857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:58:29.589544  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.589565  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:58:29.589625  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:58:29.605410  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.605426  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:58:29.605472  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:58:29.621393  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.621412  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:58:29.621450  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:58:29.637671  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.637690  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:58:29.637732  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:58:29.653501  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.653533  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:58:29.653590  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:58:29.669033  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.669058  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:58:29.669072  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:58:29.669086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:58:29.722126  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:58:29.722161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:58:29.735919  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:58:29.735946  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:58:29.803585  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:58:29.803615  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:58:29.803629  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:58:29.843153  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:58:29.843183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 01:58:29.906091  170748 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:58:29.906150  170748 out.go:239] * 
	W0229 01:58:29.906995  170748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:58:29.910220  170748 out.go:177] 
	W0229 01:58:29.911536  170748 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	W0229 01:58:29.911581  170748 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:58:29.911600  170748 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:58:29.912937  170748 out.go:177] 
	
	
	==> Docker <==
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776150999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776206246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776256438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776308167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776347865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776476626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776540257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776622510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776676461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776885278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776965976Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777030325Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777311132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777539525Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777641426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777854491Z" level=info msg="containerd successfully booted in 0.034774s"
	Feb 29 01:50:21 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:21.976247648Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.012708683Z" level=info msg="Loading containers: start."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.140588585Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.193875502Z" level=info msg="Loading containers: done."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209172228Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209243974Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241102168Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241236205Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:50:22 old-k8s-version-096771 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-29T01:58:30Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 01:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 01:50] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +2.425571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914813] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.071671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054332] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.114259] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.335012] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.127181] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +0.149601] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +5.311700] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.076969] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.064548] systemd-fstab-generator[1441]: Ignoring "noauto" option for root device
	[  +0.060768] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 01:54] systemd-fstab-generator[9475]: Ignoring "noauto" option for root device
	[  +0.059471] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 01:56] systemd-fstab-generator[11246]: Ignoring "noauto" option for root device
	[  +0.070220] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:58:31 up 8 min,  0 users,  load average: 0.69, 0.27, 0.12
	Linux old-k8s-version-096771 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 01:58:29 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 01:58:29 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Feb 29 01:58:29 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 01:58:29 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: I0229 01:58:30.045687   12942 server.go:410] Version: v1.16.0
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: I0229 01:58:30.046302   12942 plugins.go:100] No cloud provider specified.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: I0229 01:58:30.046357   12942 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: I0229 01:58:30.050434   12942 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: W0229 01:58:30.051582   12942 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: W0229 01:58:30.051706   12942 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12942]: F0229 01:58:30.052611   12942 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: I0229 01:58:30.763198   12965 server.go:410] Version: v1.16.0
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: I0229 01:58:30.763953   12965 plugins.go:100] No cloud provider specified.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: I0229 01:58:30.764054   12965 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: I0229 01:58:30.766464   12965 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: W0229 01:58:30.768835   12965 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: W0229 01:58:30.768993   12965 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 01:58:30 old-k8s-version-096771 kubelet[12965]: F0229 01:58:30.769193   12965 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 01:58:30 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (269.178189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-096771" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (519.86s)
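The failure signature above is consistent: kubeadm's preflight detected the "cgroupfs" Docker cgroup driver, and every kubelet restart dies with "failed to run Kubelet: mountpoint for cpu not found", which is the cgroup mismatch the log's own Suggestion line points at. A minimal manual check along those lines (the profile name old-k8s-version-096771 and the v1.16.0 version are taken from the log; the exact start flags are illustrative, not the test's real invocation):

	# Which cgroup driver is Docker actually using inside the VM?
	# kubeadm's preflight warning above reported "cgroupfs".
	minikube -p old-k8s-version-096771 ssh -- docker info --format '{{.CgroupDriver}}'

	# Inspect the crash-looping kubelet (restart counter was at 155/156 above).
	minikube -p old-k8s-version-096771 ssh -- sudo journalctl -xeu kubelet | tail -n 50

	# Retry the start with the kubelet's cgroup driver pinned to systemd,
	# as the Suggestion line in the log proposes.
	out/minikube-linux-amd64 start -p old-k8s-version-096771 \
	  --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd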

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 01:58:35.606444  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:58:52.961502  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:58:55.670678  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:59:10.694367  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
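Every one of the poll warnings above is the same failure: the apiserver at 192.168.61.59:8443 is refusing connections, so the dashboard query can never succeed within the 9m0s window. A sketch of reproducing the helper's check by hand (assuming the kubectl context matches the profile name, which is minikube's default):

	# Same namespace and label selector the helper polls above.
	kubectl --context old-k8s-version-096771 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard

	# Confirm the control plane state first; the status check in the
	# previous test entry already reported the apiserver as Stopped.
	out/minikube-linux-amd64 status -p old-k8s-version-096771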
E0229 01:59:42.886844  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 3 times]
E0229 01:59:46.679644  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 7 times]
E0229 01:59:55.516927  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 2 times]
E0229 01:59:57.863232  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 17 times]
E0229 02:00:16.007331  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 17 times]
E0229 02:00:33.746618  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 7 times]
E0229 02:00:41.966792  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:41.972067  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:41.982385  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:42.002667  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:42.043044  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:42.123390  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:42.283869  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:00:42.604062  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:00:43.245129  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:00:44.525957  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 2 times]
E0229 02:00:47.087934  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 4 times]
E0229 02:00:52.208284  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 9 times]
E0229 02:01:02.448701  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 2 times]
E0229 02:01:05.332856  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 4 times]
E0229 02:01:09.723696  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:01:11.262505  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 11 times]
E0229 02:01:22.929210  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 10 times]
E0229 02:01:34.117868  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.123144  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.133409  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.153713  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.194043  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.274518  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:34.434976  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:01:34.755231  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:01:35.396217  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 1 times]
E0229 02:01:36.676675  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 1 times]
E0229 02:01:39.237533  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 4 times]
E0229 02:01:44.239447  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 02:01:44.358740  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[last message repeated 10 times]
E0229 02:01:54.599503  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 8 more times]
E0229 02:02:03.890038  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 10 more times]
E0229 02:02:15.079968  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 12 more times]
E0229 02:02:28.297677  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 02:02:28.378970  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 5 more times]
E0229 02:02:34.308351  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 21 more times]
E0229 02:02:56.040239  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:02:57.028375  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 9 more times]
E0229 02:03:07.285636  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 18 more times]
E0229 02:03:25.811176  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 9 more times]
E0229 02:03:35.606010  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 16 more times]
E0229 02:03:52.961383  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 2 more times]
E0229 02:03:55.671577  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 14 more times]
E0229 02:04:10.694222  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 6 more times]
E0229 02:04:17.961355  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[warning above repeated 1 more time]
E0229 02:04:20.074233  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 02:04:42.886971  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 02:04:46.679980  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 02:04:55.517659  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 02:04:57.863346  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 02:04:58.652288  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 02:05:18.714746  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 02:05:41.966846  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:06:05.332468  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 02:06:09.651732  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
E0229 02:06:11.262555  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 02:06:18.562500  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 02:06:20.913989  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 02:06:34.117888  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
E0229 02:06:44.239087  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 02:07:01.801942  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: [previous WARNING repeated 25 more times]
E0229 02:07:28.297746  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: [previous WARNING repeated 2 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (246.944629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-096771" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
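
The repeated WARNINGs above come from a poll loop: list pods by label selector, log the error if the apiserver refuses the connection, sleep, and try again until the context deadline (9m0s here) expires. A minimal sketch of that pattern with client-go — the function name, the 3s interval, and the main wiring are illustrative, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls the pod list until a matching pod is Running
// or ctx expires, logging each failed list much like helpers_test.go does.
func waitForRunningPod(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as above
		case <-time.After(3 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	c := kubernetes.NewForConfigOrDie(cfg)
	log.Println(waitForRunningPod(ctx, c, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"))
}
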
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (235.330966ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096771 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-384331 image list                          | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-133807             | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-133807                  | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-133807 image list                           | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| image   | no-preload-449532 image list                           | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| image   | default-k8s-diff-port-308557                           | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:53:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
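
The [IWEF] prefix described above is the standard klog/glog header. As a hedged sketch (not minikube's actual wiring), lines in this format can be produced with k8s.io/klog/v2:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, etc.
	flag.Parse()
	klog.Infof("Setting OutFile to fd %d ...", 1)          // I-prefixed line
	klog.Warning("unexpected machine state, will restart") // W-prefixed line
	klog.Errorf("key failed with : %v", "example")         // E-prefixed line
	klog.Flush()
}
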
	I0229 01:53:36.885660  172338 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:53:36.885812  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.885823  172338 out.go:304] Setting ErrFile to fd 2...
	I0229 01:53:36.885830  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.886451  172338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:53:36.887445  172338 out.go:298] Setting JSON to false
	I0229 01:53:36.888850  172338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5768,"bootTime":1709165849,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:53:36.888922  172338 start.go:139] virtualization: kvm guest
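
The hostinfo struct printed above matches gopsutil's host.InfoStat field for field; a sketch of gathering the same data, assuming github.com/shirou/gopsutil/v3 (the report itself does not confirm the library):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	info, err := host.Info()
	if err != nil {
		log.Fatal(err)
	}
	b, _ := json.Marshal(info) // same keys as the hostinfo line above
	fmt.Printf("hostinfo: %s\n", b)
	fmt.Printf("virtualization: %s %s\n", info.VirtualizationSystem, info.VirtualizationRole)
}
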
	I0229 01:53:36.890884  172338 out.go:177] * [newest-cni-133807] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:53:36.892679  172338 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:53:36.893863  172338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:53:36.892754  172338 notify.go:220] Checking for updates...
	I0229 01:53:36.895149  172338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:53:36.896330  172338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:53:36.897604  172338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:53:36.898902  172338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:53:36.900711  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:36.901271  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.901326  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.917325  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0229 01:53:36.917751  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.918470  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.918496  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.918925  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.919139  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:36.919426  172338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:53:36.919862  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.919920  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.935501  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0229 01:53:36.935929  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.936397  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.936423  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.936740  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.936966  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
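
Each "Launching plugin server ... Plugin server listening ... Calling .GetVersion" triplet above is libmachine's out-of-process driver pattern: the kvm2 driver binary serves RPC on a random loopback port and minikube invokes methods on it. A loose net/rpc sketch of that shape — the Driver type and method set here are illustrative, not libmachine's real API:

package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Driver stands in for the kvm2 driver; the real RPC surface is richer
// (GetMachineName, DriverName, GetState, ...).
type Driver struct{}

func (d *Driver) GetVersion(_ int, v *int) error { *v = 1; return nil }

func main() {
	srv := rpc.NewServer()
	if err := srv.Register(new(Driver)); err != nil {
		log.Fatal(err)
	}
	l, err := net.Listen("tcp", "127.0.0.1:0") // random loopback port, as in the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Plugin server listening at address %s\n", l.Addr())
	for {
		conn, err := l.Accept()
		if err != nil {
			return
		}
		go srv.ServeConn(conn)
	}
}
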
	I0229 01:53:36.975046  172338 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:53:36.976294  172338 start.go:299] selected driver: kvm2
	I0229 01:53:36.976310  172338 start.go:903] validating driver "kvm2" against &{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.976488  172338 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:53:36.977258  172338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.977350  172338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:53:36.994597  172338 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:53:36.994975  172338 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 01:53:36.995042  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:53:36.995059  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:53:36.995069  172338 start_flags.go:323] config:
	{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.995229  172338 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.997622  172338 out.go:177] * Starting control plane node newest-cni-133807 in cluster newest-cni-133807
	I0229 01:53:36.998696  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:53:36.998739  172338 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 01:53:36.998757  172338 cache.go:56] Caching tarball of preloaded images
	I0229 01:53:36.998845  172338 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:53:36.998863  172338 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 01:53:36.998993  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
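
The preload steps above are a plain cache check: compute the expected tarball path for this Kubernetes version and container runtime, and skip the download if the file is already on disk. A hedged sketch — the path layout is copied from the log, the helper names are invented:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout visible in the log above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// havePreload reports whether the tarball is already cached, in which
// case the download is skipped ("Found ... in cache, skipping download").
func havePreload(minikubeHome, k8sVersion, runtime string) bool {
	_, err := os.Stat(preloadPath(minikubeHome, k8sVersion, runtime))
	return err == nil
}

func main() {
	fmt.Println(havePreload(os.Getenv("MINIKUBE_HOME"), "v1.29.0-rc.2", "docker"))
}
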
	I0229 01:53:36.999265  172338 start.go:365] acquiring machines lock for newest-cni-133807: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:53:36.999328  172338 start.go:369] acquired machines lock for "newest-cni-133807" in 34.294µs
	I0229 01:53:36.999350  172338 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:53:36.999359  172338 fix.go:54] fixHost starting: 
	I0229 01:53:36.999756  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.999804  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:37.014484  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0229 01:53:37.014854  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:37.015358  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:37.015380  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:37.015794  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:37.016017  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:37.016186  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:53:37.017841  172338 fix.go:102] recreateIfNeeded on newest-cni-133807: state=Stopped err=<nil>
	I0229 01:53:37.017866  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	W0229 01:53:37.018024  172338 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:53:37.019758  172338 out.go:177] * Restarting existing kvm2 VM for "newest-cni-133807" ...
	I0229 01:53:35.187854  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:37.188009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:35.706584  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:38.207259  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
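
The pod_ready.go lines above and below check the pod's Ready condition, not its phase; a pod can be Running while its readiness probe still fails, which is exactly the metrics-server state being reported. A minimal sketch of that check:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady mirrors the check behind `has status "Ready":"False"`: scan
// the status conditions for PodReady and require it to be True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{}
	p.Status.Conditions = []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}
	fmt.Println(podReady(p)) // false, like the metrics-server pods above
}
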
	I0229 01:53:36.771905  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.771929  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:36.771974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:36.795209  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.795242  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:36.795305  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:36.818025  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.818055  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:36.818111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:36.845202  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.845228  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:36.845238  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:36.845249  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:36.863710  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:36.863746  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:36.941560  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:36.941585  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:36.941599  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:36.985345  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:36.985374  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:37.049297  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:37.049331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
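
Each gather cycle above follows the same recipe: pgrep for the apiserver, then one `docker ps -a` per control-plane component with a name filter to collect container IDs (all empty here, so the post-mortem falls back to journalctl and dmesg). A sketch of the container-discovery step, assuming a local docker CLI:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same query as the log above:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
	}
}
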
	I0229 01:53:39.600693  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:39.614787  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:39.637491  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.637520  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:39.637579  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:39.655913  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.655934  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:39.655974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:39.673860  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.673884  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:39.673948  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:39.694282  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.694306  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:39.694362  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:39.713273  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.713298  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:39.713354  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:39.738601  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.738637  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:39.738694  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:39.757911  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.757946  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:39.758003  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:39.785844  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.785876  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:39.785889  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:39.785923  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:39.890021  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:39.890046  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:39.890063  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:39.946696  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:39.946738  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:40.011265  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:40.011294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:40.061033  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:40.061066  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:37.020899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Start
	I0229 01:53:37.021060  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring networks are active...
	I0229 01:53:37.021715  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network default is active
	I0229 01:53:37.022109  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network mk-newest-cni-133807 is active
	I0229 01:53:37.022542  172338 main.go:141] libmachine: (newest-cni-133807) Getting domain xml...
	I0229 01:53:37.023299  172338 main.go:141] libmachine: (newest-cni-133807) Creating domain...
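
"Ensuring networks are active ... Getting domain xml ... Creating domain" above is the libvirt restart sequence: look up the defined-but-stopped domain and start ("create", in libvirt terms) it again. A hedged sketch with the Go libvirt bindings — the import path is assumed and this is not minikube's exact code:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the config above
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("newest-cni-133807")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Starting a defined-but-stopped domain is "create" in libvirt terms.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
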
	I0229 01:53:38.239149  172338 main.go:141] libmachine: (newest-cni-133807) Waiting to get IP...
	I0229 01:53:38.240362  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.240876  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.240965  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.240868  172372 retry.go:31] will retry after 275.310864ms: waiting for machine to come up
	I0229 01:53:38.517440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.518160  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.518185  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.518111  172372 retry.go:31] will retry after 317.329288ms: waiting for machine to come up
	I0229 01:53:38.836647  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.837248  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.837276  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.837187  172372 retry.go:31] will retry after 392.589727ms: waiting for machine to come up
	I0229 01:53:39.231732  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.232246  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.232285  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.232194  172372 retry.go:31] will retry after 424.503594ms: waiting for machine to come up
	I0229 01:53:39.658948  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.659654  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.659681  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.659612  172372 retry.go:31] will retry after 509.777965ms: waiting for machine to come up
	I0229 01:53:40.171487  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.172122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.172152  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.172076  172372 retry.go:31] will retry after 742.622621ms: waiting for machine to come up
	I0229 01:53:40.915896  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.916440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.916470  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.916388  172372 retry.go:31] will retry after 749.503001ms: waiting for machine to come up
	I0229 01:53:41.667865  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:41.668416  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:41.668460  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:41.668341  172372 retry.go:31] will retry after 899.624948ms: waiting for machine to come up
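
The retry.go delays above grow roughly geometrically with jitter (275ms, 317ms, 392ms, ..., 899ms) while polling for the VM's DHCP lease. A generic sketch of that jittered backoff — the constants, growth factor, and names are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff retries fn with a jittered, growing delay, printing the
// same "will retry after ..." breadcrumb as retry.go:31 above.
func retryBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1)) // jitter avoids lockstep polling
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return errors.New("machine never came up")
}

func main() {
	tries := 0
	_ = retryBackoff(5, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
}
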
	I0229 01:53:39.686755  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:41.687219  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:40.705623  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:42.708440  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:42.579474  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:42.594968  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:42.614588  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.614619  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:42.614678  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:42.633590  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.633626  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:42.633675  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:42.650641  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.650670  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:42.650725  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:42.667825  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.667848  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:42.667896  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:42.687222  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.687250  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:42.687306  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:42.707192  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.707221  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:42.707283  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:42.727815  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.727842  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:42.727909  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:42.747315  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.747344  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:42.747358  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:42.747373  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:42.835128  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.835153  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:42.835166  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:42.878670  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:42.878704  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:42.938260  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:42.938295  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:42.988986  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:42.989023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.504852  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:45.519775  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:45.544878  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.544907  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:45.544956  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:45.564358  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.564392  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:45.564452  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:45.585154  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.585184  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:45.585248  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:45.605709  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.605739  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:45.605811  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:45.623803  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.623890  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:45.623962  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:45.643133  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.643164  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:45.643234  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:45.661762  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.661802  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:45.661861  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:45.680592  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.680620  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:45.680634  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:45.680649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:45.745642  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:45.745700  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:45.823069  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:45.823109  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:45.892445  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:45.892486  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.910297  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:45.910333  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:45.990129  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
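The stanza above is minikube's log-gathering pass while the apiserver is unreachable: each expected control-plane container is probed by its k8s_<name> Docker container name, and a warning is logged when none is found. A minimal shell approximation of that probe loop (the container names are taken verbatim from the log; nothing else is assumed):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # same filter/format the log shows ssh_runner invoking on the guest
      ids=$(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${name}\"" >&2
      fi
    done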
	I0229 01:53:42.569261  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:42.569902  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:42.569929  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:42.569879  172372 retry.go:31] will retry after 1.844906669s: waiting for machine to come up
	I0229 01:53:44.416650  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:44.417122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:44.417147  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:44.417082  172372 retry.go:31] will retry after 1.668166694s: waiting for machine to come up
	I0229 01:53:46.086877  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:46.087409  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:46.087439  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:46.087360  172372 retry.go:31] will retry after 2.357310139s: waiting for machine to come up
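The retry.go:31 lines above show libmachine polling for the VM's DHCP-assigned IP with a growing (jittered) delay between attempts. A hedged sketch of the same wait loop done directly against libvirt (the domain name is from the log; the 60s budget, the clean doubling, and the virsh/domifaddr parsing are illustrative assumptions, not minikube's actual code):

    deadline=$((SECONDS + 60))
    delay=1
    until virsh -q domifaddr newest-cni-133807 | grep -q ipv4; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for IP" >&2; exit 1; }
      sleep "$delay"
      delay=$((delay * 2))   # the real backoff is jittered, not a clean doubling
    done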
	I0229 01:53:44.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:46.187804  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:48.689109  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:45.205820  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:47.207153  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:49.207534  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
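The interleaved pod_ready.go:102 lines are two other test profiles polling a metrics-server pod whose "Ready" condition stays "False". The equivalent one-off check with kubectl (pod name taken from the log line above; the jsonpath expression is the standard idiom, not minikube's internal API call):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-nhrls \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" until the pod's readiness probe starts passing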
	I0229 01:53:48.491272  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:48.505184  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:48.525599  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.525629  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:48.525706  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:48.546500  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.546532  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:48.546594  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:48.568626  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.568658  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:48.568721  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:48.587381  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.587414  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:48.587473  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:48.605940  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.605978  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:48.606036  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:48.627862  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.627939  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:48.627990  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:48.647290  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.647337  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:48.647409  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:48.668387  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.668421  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:48.668436  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:48.668465  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:48.749495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:48.749564  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:48.768497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:48.768537  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:48.851955  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:48.851986  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:48.852007  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:48.897006  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:48.897051  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:51.469648  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:51.483142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:51.505315  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.505336  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:51.505382  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:51.527266  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.527291  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:51.527349  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:51.549665  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.549695  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:51.549762  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:51.567017  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.567048  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:51.567115  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:51.584257  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.584283  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:51.584330  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:51.601100  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.601120  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:51.601162  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:51.617334  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.617364  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:51.617412  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:51.634847  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.634870  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:51.634884  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:51.634906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:51.699822  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:51.699852  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:51.699874  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:51.748726  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:51.748767  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:48.446918  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:48.447458  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:48.447486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:48.447405  172372 retry.go:31] will retry after 3.5649966s: waiting for machine to come up
	I0229 01:53:50.690417  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:53.186096  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.706757  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:54.207589  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.821091  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:51.821125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:51.870732  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:51.870762  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.385901  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:54.399480  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:54.417966  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.417996  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:54.418059  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:54.436602  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.436625  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:54.436671  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:54.454846  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.454871  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:54.454929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:54.475020  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.475052  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:54.475106  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:54.492090  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.492124  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:54.492179  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:54.508529  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.508552  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:54.508612  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:54.525505  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.525532  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:54.525592  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:54.542182  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.542205  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:54.542217  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:54.542231  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:54.591034  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:54.591075  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.607014  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:54.607059  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:54.673259  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:54.673277  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:54.673294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:54.735883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:54.735933  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:52.015966  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:52.016461  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:52.016486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:52.016421  172372 retry.go:31] will retry after 3.221741445s: waiting for machine to come up
	I0229 01:53:55.241903  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242455  172338 main.go:141] libmachine: (newest-cni-133807) Found IP for machine: 192.168.50.38
	I0229 01:53:55.242486  172338 main.go:141] libmachine: (newest-cni-133807) Reserving static IP address...
	I0229 01:53:55.242513  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has current primary IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.242982  172338 main.go:141] libmachine: (newest-cni-133807) Reserved static IP address: 192.168.50.38
	I0229 01:53:55.243002  172338 main.go:141] libmachine: (newest-cni-133807) DBG | skip adding static IP to network mk-newest-cni-133807 - found existing host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"}
	I0229 01:53:55.243021  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Getting to WaitForSSH function...
	I0229 01:53:55.243051  172338 main.go:141] libmachine: (newest-cni-133807) Waiting for SSH to be available...
	I0229 01:53:55.245263  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245602  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.245635  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245719  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH client type: external
	I0229 01:53:55.245756  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa (-rw-------)
	I0229 01:53:55.245815  172338 main.go:141] libmachine: (newest-cni-133807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:53:55.245837  172338 main.go:141] libmachine: (newest-cni-133807) DBG | About to run SSH command:
	I0229 01:53:55.245849  172338 main.go:141] libmachine: (newest-cni-133807) DBG | exit 0
	I0229 01:53:55.365823  172338 main.go:141] libmachine: (newest-cni-133807) DBG | SSH cmd err, output: <nil>: 
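WaitForSSH above simply runs `exit 0` through an external ssh client with host-key checking disabled; a zero exit means the guest's sshd is accepting the provisioning key. Reconstructed from the option list logged a few lines earlier (key path and address exactly as logged):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa \
        docker@192.168.50.38 'exit 0'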
	I0229 01:53:55.366165  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetConfigRaw
	I0229 01:53:55.366733  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.369039  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369334  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.369365  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369634  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
	I0229 01:53:55.369878  172338 machine.go:88] provisioning docker machine ...
	I0229 01:53:55.369899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.370074  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370280  172338 buildroot.go:166] provisioning hostname "newest-cni-133807"
	I0229 01:53:55.370305  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.372352  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372683  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.372714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372826  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.373050  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373221  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.373545  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.373765  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.373801  172338 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-133807 && echo "newest-cni-133807" | sudo tee /etc/hostname
	I0229 01:53:55.501380  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-133807
	
	I0229 01:53:55.501425  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.504532  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.504925  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.504953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.505203  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.505442  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505829  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.505993  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.506180  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.506197  172338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133807/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:53:55.627363  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
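The two SSH commands above are the usual two-step hostname provisioning: set the live hostname and persist it to /etc/hostname, then make sure /etc/hosts maps the name. Condensed below (the logged script also rewrites an existing 127.0.1.1 entry via sed; this sketch keeps only the append branch):

    sudo hostname newest-cni-133807 && echo "newest-cni-133807" | sudo tee /etc/hostname
    grep -q 'newest-cni-133807' /etc/hosts || \
      echo '127.0.1.1 newest-cni-133807' | sudo tee -a /etc/hosts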
	I0229 01:53:55.627403  172338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:53:55.627445  172338 buildroot.go:174] setting up certificates
	I0229 01:53:55.627465  172338 provision.go:83] configureAuth start
	I0229 01:53:55.627478  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.627799  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.630746  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631187  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.631216  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631361  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.633714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634069  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.634098  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634214  172338 provision.go:138] copyHostCerts
	I0229 01:53:55.634269  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:53:55.634288  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:53:55.634356  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:53:55.634447  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:53:55.634455  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:53:55.634478  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:53:55.634526  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:53:55.634534  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:53:55.634553  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:53:55.634601  172338 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133807 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube newest-cni-133807]
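provision.go is generating the Docker TLS server certificate, signed by the local CA, with the VM IP, loopback, and both hostnames as SANs. A hedged openssl equivalent (the org and SAN list come from the log line above; the openssl recipe itself is a generic sketch, not minikube's Go implementation):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.newest-cni-133807"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.50.38,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:newest-cni-133807')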
	I0229 01:53:55.739651  172338 provision.go:172] copyRemoteCerts
	I0229 01:53:55.739705  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:53:55.739730  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.742433  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.742797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.742821  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.743006  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.743211  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.743367  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.743503  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:55.825143  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:53:55.850150  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 01:53:55.873623  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:53:55.897271  172338 provision.go:86] duration metric: configureAuth took 269.790188ms
	I0229 01:53:55.897298  172338 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:53:55.897528  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:55.897558  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.897880  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.900413  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900726  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.900754  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900862  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.901029  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.901575  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.901796  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.901811  172338 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:53:56.003790  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:53:56.003817  172338 buildroot.go:70] root file system type: tmpfs
	I0229 01:53:56.003960  172338 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:53:56.003989  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.006912  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007266  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.007291  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007470  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.007629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007793  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.008184  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.008354  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.008418  172338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:53:56.124499  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:53:56.124533  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.127457  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127793  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.127829  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127968  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.128151  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128308  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128498  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.128680  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.128833  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.128852  172338 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:53:55.187275  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.189486  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:56.706921  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:59.205557  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.106913  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 01:53:57.106944  172338 machine.go:91] provisioned docker machine in 1.737051901s
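The `diff ... || { mv ...; daemon-reload; enable; restart; }` one-liner a few lines up is an idempotent unit install: if the rendered docker.service already matches what is on disk, nothing happens; otherwise the new file is swapped in and the service reloaded and restarted. The same pattern, generalized (render_unit is a hypothetical stand-in for the long printf seen earlier):

    render_unit > /tmp/docker.service.new    # hypothetical: emits the [Unit]/[Service]/[Install] text
    sudo cp /tmp/docker.service.new /lib/systemd/system/docker.service.new
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }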
	I0229 01:53:57.106958  172338 start.go:300] post-start starting for "newest-cni-133807" (driver="kvm2")
	I0229 01:53:57.106971  172338 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:53:57.106987  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.107348  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:53:57.107378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.109947  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110278  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.110306  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110419  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.110655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.110847  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.110998  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.195254  172338 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:53:57.199660  172338 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:53:57.199686  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:53:57.199749  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:53:57.199861  172338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:53:57.199978  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:53:57.211667  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:53:57.236009  172338 start.go:303] post-start completed in 129.030126ms
	I0229 01:53:57.236038  172338 fix.go:56] fixHost completed within 20.236678345s
	I0229 01:53:57.236066  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.239097  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239405  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.239428  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239632  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.239810  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.239990  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.240135  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.240351  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:57.240577  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:57.240592  172338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 01:53:57.347803  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171637.329083069
	
	I0229 01:53:57.347829  172338 fix.go:206] guest clock: 1709171637.329083069
	I0229 01:53:57.347839  172338 fix.go:219] Guest: 2024-02-29 01:53:57.329083069 +0000 UTC Remote: 2024-02-29 01:53:57.236042976 +0000 UTC m=+20.403256492 (delta=93.040093ms)
	I0229 01:53:57.347867  172338 fix.go:190] guest clock delta is within tolerance: 93.040093ms
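fix.go compares the guest's `date +%s.%N` against the host clock and proceeds only when the skew is inside tolerance; here the delta is ~93ms. A rough host-side check in the same spirit (the 2s threshold is an assumption for illustration, and key-based ssh login is assumed as set up above):

    guest=$(ssh docker@192.168.50.38 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
    awk -v d="$delta" 'BEGIN { exit (d < 2.0) ? 0 : 1 }' \
      && echo "clock delta ${delta}s within tolerance"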
	I0229 01:53:57.347875  172338 start.go:83] releasing machines lock for "newest-cni-133807", held for 20.348533837s
	I0229 01:53:57.347898  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.348162  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:57.350842  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351284  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.351312  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351648  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352219  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352485  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352599  172338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:53:57.352685  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.352765  172338 ssh_runner.go:195] Run: cat /version.json
	I0229 01:53:57.352801  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.355935  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356331  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356570  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.356571  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356764  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.356906  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356923  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.356930  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.357085  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.357144  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.357257  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.357402  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.357558  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.439867  172338 ssh_runner.go:195] Run: systemctl --version
	I0229 01:53:57.461722  172338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:53:57.469492  172338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:53:57.469553  172338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:53:57.488804  172338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:53:57.488832  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.488972  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:53:57.510573  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 01:53:57.522254  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:53:57.533175  172338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:53:57.533265  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:53:57.544648  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.556155  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:53:57.568806  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.579441  172338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:53:57.591000  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:53:57.602790  172338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:53:57.612548  172338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:53:57.622708  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:57.774983  172338 ssh_runner.go:195] Run: sudo systemctl restart containerd
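The sed sequence above rewrites /etc/containerd/config.toml in place so containerd uses the "cgroupfs" driver: SystemdCgroup is forced off, the legacy v1 runtime class is mapped to runc v2, and the CNI conf dir is pinned. The two load-bearing edits, isolated (paths and keys exactly as in the log):

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd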
	I0229 01:53:57.803366  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.803462  172338 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:53:57.819377  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.835552  172338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:53:57.855766  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.870321  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.882616  172338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:53:57.906767  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.919519  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
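With Docker chosen as the runtime (containerd and crio stopped above), crictl is repointed at the cri-dockerd socket by overwriting /etc/crictl.yaml, which an earlier write had aimed at containerd. The resulting file is just:

    # /etc/crictl.yaml after the write above
    runtime-endpoint: unix:///var/run/cri-dockerd.sock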
	I0229 01:53:57.937892  172338 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:53:57.941557  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:53:57.950404  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:53:57.966732  172338 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:53:58.084501  172338 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:53:58.208172  172338 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:53:58.208327  172338 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:53:58.231616  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:58.339214  172338 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:53:59.877873  172338 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.53860785s)
	I0229 01:53:59.877980  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 01:53:59.892601  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:53:59.908111  172338 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 01:54:00.026741  172338 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 01:54:00.150989  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.270596  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 01:54:00.292845  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:54:00.310771  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.442177  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 01:54:00.520800  172338 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 01:54:00.520874  172338 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 01:54:00.527623  172338 start.go:543] Will wait 60s for crictl version
	I0229 01:54:00.527683  172338 ssh_runner.go:195] Run: which crictl
	I0229 01:54:00.532463  172338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:54:00.599208  172338 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 01:54:00.599291  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.627562  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.655024  172338 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 01:54:00.655069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:54:00.658010  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658343  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:00.658372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658608  172338 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 01:54:00.662943  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
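The grep/echo pipeline above is an idempotent upsert of the host.minikube.internal entry: drop any existing line for the name, then append the fresh mapping. A Go sketch of the same transformation (illustrative only):

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ trick above.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(l, "\t"+name) {
			out = append(out, l)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.50.1", "host.minikube.internal"))
}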
	I0229 01:54:00.679113  172338 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 01:53:57.304118  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:57.317352  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:57.334647  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.334674  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:57.334724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:57.354591  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.354620  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:57.354664  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:57.378535  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.378558  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:57.378613  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:57.398944  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.398973  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:57.399019  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:57.419479  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.419500  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:57.419544  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:57.435860  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.435888  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:57.435942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:57.453347  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.453383  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:57.453430  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:57.473140  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.473168  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:57.473182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:57.473196  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:57.526048  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:57.526079  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:57.541246  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:57.541271  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:57.616011  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:57.616037  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:57.616052  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:57.658815  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:57.658856  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.228028  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:00.242250  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:00.260188  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.260217  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:00.260277  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:00.279694  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.279717  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:00.279768  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:00.300245  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.300276  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:00.300331  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:00.321402  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.321423  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:00.321484  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:00.341221  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.341252  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:00.341309  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:00.359202  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.359228  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:00.359274  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:00.377486  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.377515  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:00.377566  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:00.396751  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.396780  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:00.396792  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:00.396804  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:00.411321  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:00.411354  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:00.486044  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:00.486070  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:00.486086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:00.533467  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:00.533493  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.601400  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:00.601429  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:00.680518  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:54:00.680595  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.699558  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.699582  172338 docker.go:615] Images already preloaded, skipping extraction
	I0229 01:54:00.699651  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.720362  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.720382  172338 cache_images.go:84] Images are preloaded, skipping loading
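"Images are preloaded, skipping loading" follows from comparing the expected image list for v1.29.0-rc.2 against the `docker images` output above. A sketch of that comparison, under the assumption it is a simple containment check:

package main

import (
	"fmt"
	"strings"
)

// allPreloaded reports whether every expected image appears in the
// newline-separated `docker images --format {{.Repository}}:{{.Tag}}` output.
func allPreloaded(expected []string, dockerImages string) bool {
	have := map[string]bool{}
	for _, l := range strings.Split(dockerImages, "\n") {
		have[strings.TrimSpace(l)] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	out := "registry.k8s.io/pause:3.9\nregistry.k8s.io/etcd:3.5.10-0\n"
	fmt.Println(allPreloaded([]string{"registry.k8s.io/pause:3.9"}, out)) // true
}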
	I0229 01:54:00.720435  172338 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:54:00.750538  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:00.750564  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:00.750582  172338 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 01:54:00.750604  172338 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133807 NodeName:newest-cni-133807 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:54:00.750845  172338 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-133807"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 01:54:00.750974  172338 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-133807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 01:54:00.751053  172338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 01:54:00.763338  172338 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:54:00.763421  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:54:00.774930  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0229 01:54:00.795559  172338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 01:54:00.816378  172338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0229 01:54:00.836392  172338 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0229 01:54:00.841301  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:54:00.855335  172338 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807 for IP: 192.168.50.38
	I0229 01:54:00.855370  172338 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:00.855555  172338 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:54:00.855595  172338 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:54:00.855699  172338 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/client.key
	I0229 01:54:00.855776  172338 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key.01da567d
	I0229 01:54:00.855837  172338 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key
	I0229 01:54:00.856003  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:54:00.856056  172338 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:54:00.856071  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:54:00.856107  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:54:00.856141  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:54:00.856172  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:54:00.856231  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:54:00.856935  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:54:00.884304  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:54:00.909114  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:54:00.932767  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:54:00.957174  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:54:00.982424  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:54:01.005673  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:54:01.029470  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:54:01.056951  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:54:01.080261  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:54:01.104850  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:54:01.128318  172338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:54:01.145321  172338 ssh_runner.go:195] Run: openssl version
	I0229 01:54:01.150792  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:54:01.162288  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166729  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166774  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.172237  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:54:01.183583  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:54:01.195364  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199820  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199890  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.205840  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:54:01.217694  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:54:01.229231  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233770  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233841  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.239419  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
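The openssl/ln pairs above implement the classic CA-store convention: each trusted cert is reachable as /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch that computes the link name by shelling out to the openssl CLI (assumed present; illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs `openssl x509 -hash -noout -in <cert>` and returns the hash.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", h) // the symlink name the ln step creates
}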
	I0229 01:54:01.250900  172338 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:54:01.255351  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:54:01.261364  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:54:01.267843  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:54:01.273917  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:54:01.279780  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:54:01.285722  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
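Each `openssl x509 -noout -checkend 86400` run above asks one question: does this certificate expire within the next 24 hours? The same check in pure Go with crypto/x509 (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
}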
	I0229 01:54:01.295181  172338 kubeadm.go:404] StartCluster: {Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:54:01.295318  172338 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:01.327657  172338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:54:01.340602  172338 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:54:01.340626  172338 kubeadm.go:636] restartCluster start
	I0229 01:54:01.340676  172338 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:54:01.351659  172338 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:01.352394  172338 kubeconfig.go:135] verify returned: extract IP: "newest-cni-133807" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:01.352778  172338 kubeconfig.go:146] "newest-cni-133807" context is missing from /home/jenkins/minikube-integration/18063-115328/kubeconfig - will repair!
	I0229 01:54:01.353471  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:01.354935  172338 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:54:01.365295  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.365346  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.379525  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
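The "Checking apiserver status" lines that follow repeat roughly every 500ms until the restart logic gives up with "context deadline exceeded" (see kubeadm.go:611 further down). A sketch of that poll-until-deadline pattern, assuming pgrep semantics match the command in the log:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a kube-apiserver process until ctx expires,
// mirroring the ~500ms retry cadence visible in the log.
func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. context deadline exceeded
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}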
	I0229 01:54:01.866175  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.866250  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.880632  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:53:59.689914  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.694344  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.208129  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.705473  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.160372  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:03.174216  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:03.193976  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.193997  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:03.194047  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:03.212210  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.212237  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:03.212282  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:03.229155  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.229178  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:03.229223  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:03.248201  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.248224  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:03.248287  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:03.267884  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.267908  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:03.267952  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:03.287746  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.287770  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:03.287821  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:03.306938  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.306967  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:03.307016  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:03.326486  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.326519  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:03.326534  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:03.326549  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:03.395132  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:03.395184  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:03.412879  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:03.412913  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:03.482097  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:03.482120  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:03.482132  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:03.525422  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:03.525455  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.083568  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:06.096663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:06.114370  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.114400  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:06.114445  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:06.131116  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.131136  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:06.131180  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:06.147183  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.147206  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:06.147261  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:06.163312  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.163335  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:06.163381  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:06.180224  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.180248  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:06.180302  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:06.197599  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.197627  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:06.197682  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:06.215691  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.215711  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:06.215756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:06.232575  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.232594  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:06.232606  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:06.232619  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:06.274143  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:06.274169  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.333535  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:06.333568  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:06.385263  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:06.385291  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:06.399965  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:06.399998  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:06.462490  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:02.365814  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.365888  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.381326  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:02.865848  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.865928  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.881269  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.365397  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.365478  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.380922  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.865482  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.865596  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.879430  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.366070  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.366183  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.381485  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.866086  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.866191  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.879535  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.366159  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.366268  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.379573  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.865791  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.865883  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.881058  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.365561  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.365642  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.379122  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.865926  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.879810  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.186274  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:06.187331  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.687316  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:05.705984  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:07.706819  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.962748  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:08.979756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:08.996761  170748 logs.go:276] 0 containers: []
	W0229 01:54:08.996786  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:08.996840  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:09.020061  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.020088  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:09.020144  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:09.042548  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.042578  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:09.042633  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:09.072428  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.072461  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:09.072525  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:09.089193  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.089216  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:09.089262  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:09.107143  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.107170  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:09.107220  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:09.125208  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.125228  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:09.125268  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:09.143488  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.143511  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:09.143522  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:09.143535  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:09.214360  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:09.214382  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:09.214395  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:09.256462  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:09.256492  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:09.312362  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:09.312392  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:09.362596  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:09.362630  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:07.365617  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.365729  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.379799  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:07.865347  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.865455  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.879417  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.366028  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.366123  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.380127  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.865702  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.865849  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.880014  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.365550  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.365632  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.382898  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.865431  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.865510  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.879281  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.365768  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.365864  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.380308  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.865941  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.879469  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.366107  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:11.366212  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:11.380134  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.380168  172338 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 01:54:11.380204  172338 kubeadm.go:1135] stopping kube-system containers ...
	I0229 01:54:11.380272  172338 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:11.400551  172338 docker.go:483] Stopping containers: [b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292]
	I0229 01:54:11.400620  172338 ssh_runner.go:195] Run: docker stop b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292
	I0229 01:54:11.420276  172338 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 01:54:11.442755  172338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:11.452745  172338 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
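The "config check failed" above is just an existence test over the four kubeconfig files kubeadm normally leaves behind; since none exist, minikube skips stale-config cleanup and regenerates everything from kubeadm.yaml, as the next lines show. A sketch of that check (illustrative only):

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	needsRebuild := false
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Println("missing:", f) // same files as the `ls` errors above
			needsRebuild = true
		}
	}
	fmt.Println("regenerate from /var/tmp/minikube/kubeadm.yaml:", needsRebuild)
}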
	I0229 01:54:11.452816  172338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462724  172338 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462746  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:11.576479  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:10.687632  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:13.188979  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:09.707636  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:12.206349  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:14.206598  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:11.880988  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:11.894918  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:11.915749  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.915777  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:11.915837  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:11.933269  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.933295  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:11.933388  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:11.950460  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.950483  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:11.950530  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:11.966919  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.966943  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:11.967004  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:11.987487  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.987519  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:11.987602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:12.011234  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.011265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:12.011324  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:12.039057  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.039083  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:12.039140  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:12.062016  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.062047  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
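The probe loop above checks, one control-plane component at a time, whether any docker container named k8s_<component> exists. A minimal local sketch of that probe, assuming docker is on PATH; the component list is copied from the log and the helper is hypothetical, not minikube's actual ssh_runner code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
}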
	I0229 01:54:12.062061  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:12.062078  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.116706  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:12.116744  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:12.176126  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:12.176156  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:12.234175  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:12.234210  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:12.249559  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:12.249597  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:12.321806  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
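With the apiserver refusing connections on localhost:8443, the run falls back to host-level diagnostics: kubelet and docker journals, dmesg, and container status. A hedged local sketch of those same shell pipelines (scripts copied verbatim from the log; gather is an illustrative stand-in for the SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic pipeline through bash and prints its output.
func gather(label, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("=== %s (err=%v) ===\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("Docker", `sudo journalctl -u docker -u cri-docker -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}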
	I0229 01:54:14.822521  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.837453  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:14.857687  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.857723  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:14.857804  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:14.879933  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.879966  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:14.880025  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:14.903296  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.903334  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:14.903477  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:14.924603  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.924635  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:14.924697  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:14.943135  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.943159  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:14.943218  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:14.961231  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.961265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:14.961326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:14.993744  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.993786  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:14.993857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:15.013656  170748 logs.go:276] 0 containers: []
	W0229 01:54:15.013686  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:15.013700  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:15.013714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:15.092540  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:15.092576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:15.162362  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:15.162406  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:15.178584  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:15.178612  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:15.256534  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:15.256560  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:15.256576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.722918  172338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146406214s)
	I0229 01:54:12.722946  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:12.927585  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:13.040907  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
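The restart path above re-runs kubeadm one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full kubeadm init. A sketch of that sequence under the assumption it runs on the node itself; binDir and cfg are taken from the log, and runCmd is a hypothetical stand-in for the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

const (
	binDir = "/var/lib/minikube/binaries/v1.29.0-rc.2"
	cfg    = "/var/tmp/minikube/kubeadm.yaml"
)

// runCmd executes one shell command and echoes its combined output.
func runCmd(script string) error {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s\n%s", script, out)
	return err
}

func main() {
	// Phase order as it appears in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local"}
	for _, p := range phases {
		script := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, p, cfg)
		if err := runCmd(script); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}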
	I0229 01:54:13.139301  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:13.139384  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:13.640506  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.139790  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.640206  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.663070  172338 api_server.go:72] duration metric: took 1.523766735s to wait for apiserver process to appear ...
	I0229 01:54:14.663104  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:14.663126  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:14.663675  172338 api_server.go:269] stopped: https://192.168.50.38:8443/healthz: Get "https://192.168.50.38:8443/healthz": dial tcp 192.168.50.38:8443: connect: connection refused
	I0229 01:54:15.163277  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:15.190654  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.686359  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:16.207410  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:18.705701  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.942183  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.942214  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:17.942230  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:17.987284  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.987321  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:18.163519  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.168857  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:18.168891  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:18.663488  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.668213  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:18.668238  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.163425  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.171029  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:19.171065  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.664211  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.668342  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:19.675820  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:19.675849  172338 api_server.go:131] duration metric: took 5.012736256s to wait for apiserver health ...
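The healthz wait above walks through the expected startup sequence: connection refused while no listener exists, 403 for the anonymous probe before RBAC is bootstrapped, 500 while poststarthooks such as rbac/bootstrap-roles are still pending, then 200 "ok". A minimal poller for that endpoint, assuming (as the probe does) that TLS verification is skipped and anonymous access is acceptable for a health check:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.38:8443/healthz" // endpoint from the log
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}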
	I0229 01:54:19.675858  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:19.675869  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:19.677686  172338 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 01:54:19.678985  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 01:54:19.690408  172338 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
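Configuring the bridge CNI amounts to creating /etc/cni/net.d and dropping a conflist into it, as the mkdir and scp above show. The JSON below is a plausible minimal bridge configuration for illustration only, not necessarily the exact 457-byte file minikube ships:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// A minimal bridge + portmap conflist (contents are an assumption).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
     "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	dst := filepath.Join(dir, "1-k8s.conflist") // filename from the log
	if err := os.WriteFile(dst, []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}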
	I0229 01:54:19.711239  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:19.720671  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:19.720701  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:19.720709  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:19.720715  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:19.720723  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:19.720731  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 01:54:19.720736  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:19.720741  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:19.720761  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:19.720767  172338 system_pods.go:74] duration metric: took 9.509631ms to wait for pod list to return data ...
	I0229 01:54:19.720776  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:19.724321  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:19.724346  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:19.724358  172338 node_conditions.go:105] duration metric: took 3.577361ms to run NodePressure ...
	I0229 01:54:19.724376  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:20.003533  172338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:54:20.017015  172338 ops.go:34] apiserver oom_adj: -16
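The oom_adj check above confirms the apiserver is shielded from the OOM killer (-16). The same probe locally: find the newest matching kube-apiserver process with the pgrep pattern from the log, then read its /proc entry. This mirrors the log's use of the legacy oom_adj file rather than oom_score_adj:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Pattern copied from the log: newest exact full-cmdline match.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}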
	I0229 01:54:20.017041  172338 kubeadm.go:640] restartCluster took 18.676407847s
	I0229 01:54:20.017053  172338 kubeadm.go:406] StartCluster complete in 18.721880164s
	I0229 01:54:20.017075  172338 settings.go:142] acquiring lock: {Name:mk324b2a181b324166fa2d8da3ad5d1101ca0339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.017158  172338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:20.018872  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.019139  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:54:20.019351  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019320  172338 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 01:54:20.019413  172338 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-133807"
	I0229 01:54:20.019429  172338 addons.go:69] Setting default-storageclass=true in profile "newest-cni-133807"
	I0229 01:54:20.019437  172338 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-133807"
	W0229 01:54:20.019445  172338 addons.go:243] addon storage-provisioner should already be in state true
	I0229 01:54:20.019445  172338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133807"
	I0229 01:54:20.019429  172338 cache.go:107] acquiring lock: {Name:mkf83f87b4b5efd9201d385629e40dc6af5715f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:54:20.019496  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019509  172338 cache.go:115] /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 01:54:20.019520  172338 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 106.029µs
	I0229 01:54:20.019530  172338 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 01:54:20.019528  172338 addons.go:69] Setting metrics-server=true in profile "newest-cni-133807"
	I0229 01:54:20.019539  172338 cache.go:87] Successfully saved all images to host disk.
	I0229 01:54:20.019551  172338 addons.go:234] Setting addon metrics-server=true in "newest-cni-133807"
	W0229 01:54:20.019561  172338 addons.go:243] addon metrics-server should already be in state true
	I0229 01:54:20.019604  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019735  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.019930  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020002  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020042  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020045  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020109  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020138  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020260  172338 addons.go:69] Setting dashboard=true in profile "newest-cni-133807"
	I0229 01:54:20.020302  172338 addons.go:234] Setting addon dashboard=true in "newest-cni-133807"
	W0229 01:54:20.020310  172338 addons.go:243] addon dashboard should already be in state true
	I0229 01:54:20.020476  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.020937  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.021009  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.029773  172338 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-133807" context rescaled to 1 replicas
	I0229 01:54:20.029823  172338 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:54:20.031663  172338 out.go:177] * Verifying Kubernetes components...
	I0229 01:54:20.033048  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:20.041914  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0229 01:54:20.041918  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0229 01:54:20.041966  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0229 01:54:20.041928  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0229 01:54:20.042220  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0229 01:54:20.042451  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042454  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042924  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043005  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043019  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043030  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043044  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043051  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043098  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043401  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043418  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043428  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.043543  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043555  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043558  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043567  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.044095  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044134  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.044332  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044374  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044404  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044425  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044925  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044970  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.045173  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045588  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.045633  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.047760  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.047785  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.049100  172338 addons.go:234] Setting addon default-storageclass=true in "newest-cni-133807"
	W0229 01:54:20.049123  172338 addons.go:243] addon default-storageclass should already be in state true
	I0229 01:54:20.049152  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.049548  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.049584  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.064541  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0229 01:54:20.065017  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065158  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I0229 01:54:20.065470  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065736  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065747  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.065986  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065997  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.066225  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066313  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066403  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.066481  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.068564  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0229 01:54:20.068997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.069067  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.069072  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.071190  172338 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 01:54:20.069506  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.072655  172338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:54:20.072680  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.074227  172338 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.074244  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:54:20.074265  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.072649  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 01:54:20.074288  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 01:54:20.074310  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.074704  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.074919  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.075229  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:20.075252  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.078346  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079734  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.079764  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080050  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.080073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080531  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080805  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.080854  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.081053  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081112  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081357  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.081683  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
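Each "new ssh client" line above corresponds to a key-based SSH session into the VM. A minimal equivalent built on golang.org/x/crypto/ssh; host, port, user, and key path are taken from the log, and the host-key check is skipped only because this is a throwaway test VM:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.50.38:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a")
	fmt.Printf("%s err=%v\n", out, err)
}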
	I0229 01:54:20.081913  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0229 01:54:20.082210  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.082371  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082386  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0229 01:54:20.082793  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082934  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.082954  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.083003  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083017  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083155  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.083315  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083325  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.083400  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083505  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.083661  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083828  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.083874  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.083905  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.084097  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.085520  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.087522  172338 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 01:54:20.088944  172338 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 01:54:17.803447  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:17.818754  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:17.838257  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.838289  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:17.838351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:17.859095  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.859128  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:17.859188  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:17.880186  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.880219  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:17.880281  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:17.905367  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.905415  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:17.905476  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:17.926888  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.926913  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:17.926974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:17.948858  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.948884  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:17.948941  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:17.967835  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.967871  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:17.967930  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:17.999903  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.999935  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:17.999949  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:17.999963  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:18.066021  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:18.066065  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:18.091596  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:18.091621  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:18.167407  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:18.167429  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:18.167444  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:18.212978  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:18.213013  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:20.785493  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.802351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:20.825685  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.825720  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:20.825770  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:20.849013  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.849043  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:20.849111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:20.871166  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.871198  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:20.871249  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:20.889932  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.889963  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:20.890022  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:20.912390  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.912416  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:20.912492  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:20.931206  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.931233  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:20.931291  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:20.949663  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.949687  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:20.949739  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:20.967249  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.967277  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:20.967288  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:20.967299  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:21.062400  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:21.062428  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:21.062445  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:21.113883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:21.113924  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:21.180620  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:21.180659  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:21.236555  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:21.236589  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:20.090259  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 01:54:20.090273  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 01:54:20.090286  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.092728  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093153  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.093186  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093317  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.093479  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.093618  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.093732  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.118803  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0229 01:54:20.119213  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.119796  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.119825  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.120194  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.120440  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.121995  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.122309  172338 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.122327  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:54:20.122352  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.124725  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125104  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.125126  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.125513  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.125629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.125721  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.333837  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 01:54:20.333867  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 01:54:20.365581  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 01:54:20.365605  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 01:54:20.387559  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 01:54:20.387585  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 01:54:20.391190  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.394118  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.442370  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 01:54:20.442407  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 01:54:20.466973  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 01:54:20.467005  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 01:54:20.489843  172338 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 01:54:20.489843  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:20.489919  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:20.489940  172338 cache_images.go:84] Images are preloaded, skipping loading
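"Images are preloaded, skipping loading" follows a comparison of the `docker images --format {{.Repository}}:{{.Tag}}` listing against the expected image set. A sketch of that check; the expected names are a subset copied from the -- stdout -- block above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
			return
		}
	}
	fmt.Println("Images are preloaded, skipping loading")
}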
	I0229 01:54:20.489947  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.489953  172338 cache_images.go:262] succeeded pushing to: newest-cni-133807
	I0229 01:54:20.489960  172338 cache_images.go:263] failed pushing to: 
	I0229 01:54:20.489991  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490005  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490309  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490327  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.490335  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490342  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490620  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:20.490605  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490643  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.507250  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.507271  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 01:54:20.529738  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.572814  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 01:54:20.572836  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 01:54:20.614903  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 01:54:20.614929  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 01:54:20.698112  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 01:54:20.698133  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 01:54:20.767402  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 01:54:20.767429  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 01:54:20.833849  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 01:54:20.833880  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 01:54:20.894077  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 01:54:20.894100  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 01:54:20.947725  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
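
All ten dashboard manifests go to the apiserver in a single kubectl apply with repeated -f flags, so one SSH round trip covers the whole addon. A sketch of composing that command string (illustrative; addons.go assembles the real invocation):

    package main

    import (
        "fmt"
        "strings"
    )

    // applyCommand builds "sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...".
    func applyCommand(kubectl string, manifests []string) string {
        args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        return strings.Join(args, " ")
    }

    func main() {
        cmd := applyCommand(
            "/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
            []string{
                "/etc/kubernetes/addons/dashboard-ns.yaml",
                "/etc/kubernetes/addons/dashboard-svc.yaml",
            },
        )
        fmt.Println(cmd)
    }
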
	I0229 01:54:21.834822  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.440658264s)
	I0229 01:54:21.834862  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.443647567s)
	I0229 01:54:21.834881  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.834882  172338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.344911071s)
	I0229 01:54:21.834935  172338 api_server.go:72] duration metric: took 1.805074704s to wait for apiserver process to appear ...
	I0229 01:54:21.834954  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:21.834975  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:21.834886  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.834904  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835393  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835415  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835425  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835429  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835443  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835456  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835468  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835479  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835433  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835847  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835856  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835859  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835862  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835868  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835874  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.843384  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:21.844033  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.844056  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.844319  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.844354  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.844370  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.844766  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:21.844804  172338 api_server.go:131] duration metric: took 9.827817ms to wait for apiserver health ...
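
The healthz wait in api_server.go boils down to GETting /healthz on the apiserver until it answers 200 with body "ok", as the two lines above show. A minimal polling sketch, assuming certificate verification is skipped for brevity (the real code trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch shortcut: production code should load the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.38:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == 200 && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }
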
	I0229 01:54:21.844815  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:21.851946  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:21.851980  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:21.851990  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:21.852004  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:21.852013  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:21.852024  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running
	I0229 01:54:21.852032  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:21.852042  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:21.852052  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:21.852063  172338 system_pods.go:74] duration metric: took 7.238252ms to wait for pod list to return data ...
	I0229 01:54:21.852075  172338 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:54:21.855974  172338 default_sa.go:45] found service account: "default"
	I0229 01:54:21.856003  172338 default_sa.go:55] duration metric: took 3.916391ms for default service account to be created ...
	I0229 01:54:21.856020  172338 kubeadm.go:581] duration metric: took 1.826163486s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 01:54:21.856046  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:21.858351  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:21.858367  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:21.858377  172338 node_conditions.go:105] duration metric: took 2.326102ms to run NodePressure ...
	I0229 01:54:21.858387  172338 start.go:228] waiting for startup goroutines ...
	I0229 01:54:21.896983  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.367194081s)
	I0229 01:54:21.897048  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897070  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897356  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897372  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897386  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897669  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897686  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897701  172338 addons.go:470] Verifying addon metrics-server=true in "newest-cni-133807"
	I0229 01:54:22.315002  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.367214151s)
	I0229 01:54:22.315099  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315119  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315448  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315472  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315488  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.315512  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315524  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315830  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315843  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.317416  172338 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-133807 addons enable metrics-server
	
	I0229 01:54:22.318943  172338 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 01:54:22.320494  172338 addons.go:505] enable addons completed in 2.301194216s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 01:54:22.320539  172338 start.go:233] waiting for cluster config update ...
	I0229 01:54:22.320554  172338 start.go:242] writing updated cluster config ...
	I0229 01:54:22.320879  172338 ssh_runner.go:195] Run: rm -f paused
	I0229 01:54:22.378739  172338 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:54:22.380459  172338 out.go:177] * Done! kubectl is now configured to use "newest-cni-133807" cluster and "default" namespace by default
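
The "(minor skew: 0)" note two lines up compares the client kubectl (1.29.2) against the cluster version (1.29.0-rc.2) on the minor component only, since kubectl supports one minor of skew in either direction. A sketch of that comparison, ignoring pre-release suffixes such as -rc.2:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component from "1.29.2" or "1.29.0-rc.2".
    func minor(v string) int {
        parts := strings.Split(v, ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        skew := minor("1.29.2") - minor("1.29.0-rc.2")
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // 0: this kubectl is safe to use
    }
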
	I0229 01:54:19.687767  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:21.689355  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:20.707480  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:22.707979  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:23.754280  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:23.768586  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:23.793150  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.793172  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:23.793221  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:23.818865  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.818896  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:23.818949  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:23.838078  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.838105  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:23.838161  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:23.859213  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.859235  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:23.859279  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:23.878876  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.878901  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:23.878938  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:23.899317  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.899340  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:23.899387  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:23.916826  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.916851  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:23.916891  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:23.933713  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.933739  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:23.933752  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:23.933766  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:24.003099  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:24.003136  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:24.021001  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:24.021038  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:24.097013  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:24.097035  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:24.097050  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:24.145682  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:24.145714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
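
Each "docker ps -a --filter=name=k8s_... --format={{.ID}}" line above probes for a named control-plane container; an empty ID list is what produces the "No container was found matching ..." warnings. A local-exec sketch of the same probe (the test runs it over SSH via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
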
	I0229 01:54:26.710373  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:26.724077  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:26.740532  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.740556  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:26.740603  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:24.187991  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.188081  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:28.688297  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:24.708094  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:27.205437  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:29.206577  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.758229  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.758251  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:26.758294  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:26.774881  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.774904  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:26.774971  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:26.790893  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.790913  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:26.790953  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:26.807273  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.807300  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:26.807359  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:26.824081  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.824107  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:26.824165  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:26.840770  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.840793  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:26.840851  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:26.856932  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.856966  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:26.856980  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:26.856995  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:26.907299  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:26.907331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:26.922552  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:26.922585  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:26.999079  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:26.999109  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:26.999125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:27.051061  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:27.051098  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:29.607727  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:29.622929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:29.641829  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.641861  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:29.641932  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:29.658732  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.658761  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:29.658825  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:29.676597  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.676619  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:29.676663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:29.695001  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.695030  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:29.695089  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:29.711947  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.711982  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:29.712038  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:29.728832  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.728860  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:29.728925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:29.744888  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.744907  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:29.744951  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:29.761144  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.761169  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:29.761182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:29.761192  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:29.810791  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:29.810823  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:29.824497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:29.824527  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:29.890825  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:29.890849  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:29.890865  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:29.934980  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:29.935023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:31.187022  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.686489  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:31.210173  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.705583  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:32.508161  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:32.523715  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:32.541751  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.541796  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:32.541860  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:32.559746  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.559772  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:32.559826  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:32.578867  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.578890  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:32.578942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:32.596025  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.596050  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:32.596104  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:32.613250  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.613277  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:32.613326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:32.629760  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.629808  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:32.629867  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:32.646940  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.646962  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:32.647034  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:32.666140  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.666167  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:32.666180  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:32.666194  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:32.718171  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:32.718206  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:32.732695  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:32.732720  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:32.796621  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:32.796642  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:32.796657  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:32.839872  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:32.839908  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:35.396632  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:35.412053  170748 kubeadm.go:640] restartCluster took 4m11.905401704s
	W0229 01:54:35.412153  170748 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 01:54:35.412183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:54:35.838651  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:35.854409  170748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:35.865129  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:35.875642  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:54:35.875696  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:54:36.022349  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:54:36.059938  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:54:36.131386  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
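
The kubeadm init above passes a comma-joined --ignore-preflight-errors list so that leftovers from the preceding reset (existing manifest files, the etcd data dir, a bound kubelet port, swap, low CPU count) do not abort the re-init; the preflight findings then surface only as the [WARNING ...] lines just shown. A trivial sketch of composing that flag from a slice:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Subset of the checks ignored in the log line above.
        ignores := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube-etcd",
            "Port-10250",
            "Swap",
            "NumCPU",
        }
        flag := "--ignore-preflight-errors=" + strings.Join(ignores, ",")
        fmt.Println("kubeadm init --config /var/tmp/minikube/kubeadm.yaml", flag)
    }
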
	I0229 01:54:36.188327  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.686993  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:36.207432  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.706396  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.687792  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.188499  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.708268  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.206459  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.686549  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.689009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.705669  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.705839  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.187643  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.193029  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.205484  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.205628  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.205895  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.686931  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:57.185865  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:56.206104  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:58.707011  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:59.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:01.188066  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.687015  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:00.709471  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.205172  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:06.187463  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:08.686768  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:05.206413  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:07.706024  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:11.187247  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:13.686761  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:10.205156  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:12.205766  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:15.688395  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:18.186256  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:14.705829  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:17.206857  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:20.186585  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:22.186702  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:19.704997  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:21.706261  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:23.707958  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:24.187221  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.187591  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.687260  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.206739  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.705765  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.687620  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:32.688592  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.706982  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:33.208209  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.692999  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:37.189729  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.705863  169202 pod_ready.go:81] duration metric: took 4m0.00680066s waiting for pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace to be "Ready" ...
	E0229 01:55:34.705886  169202 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:55:34.705893  169202 pod_ready.go:38] duration metric: took 4m1.59715045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
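
The wait that just expired (pod_ready.go, 4m0s for metrics-server-57f55c9bc5-nhrls) polls until the Pod's Ready condition turns True or the deadline passes. A sketch of that loop with client-go, assuming kubeconfig access; this is not minikube's exact helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "metrics-server-57f55c9bc5-nhrls", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("context deadline exceeded") // matches the WaitExtra failure above
    }
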
	I0229 01:55:34.705912  169202 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:55:34.705982  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:34.727306  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:34.727390  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:34.745657  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:34.745730  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:34.763604  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:34.763681  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:34.784535  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:34.784611  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:34.802288  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:34.802358  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:34.821502  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:34.821576  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:34.838522  169202 logs.go:276] 0 containers: []
	W0229 01:55:34.838548  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:34.838600  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:34.855799  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:34.855896  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:34.872982  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:34.873012  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:34.873023  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:34.895617  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:34.895647  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:34.915617  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:34.915645  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:34.989082  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:34.989112  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:35.017467  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:35.017495  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:35.046564  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:35.046591  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:35.068469  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:35.068499  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:35.098606  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:35.098636  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:35.125553  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:35.125589  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:35.171952  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:35.171993  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:35.233201  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:35.233241  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:35.291798  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.292005  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298118  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298323  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.321468  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:35.321511  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:35.338552  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:35.338582  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:35.453569  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453597  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:35.453663  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:35.453677  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453687  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453703  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453716  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.453727  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453740  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
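
The "Found kubelet problem" warnings above come from logs.go scanning the journalctl output line by line for known bad patterns, then replaying the hits under "X Problems detected in kubelet". A regex-scan sketch; the actual pattern list lives in logs.go, so the expression here is an assumption:

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Hypothetical pattern covering the RBAC reflector errors seen above.
        problem := regexp.MustCompile(`reflector\.go:\d+\].*(forbidden|no relationship found)`)
        journal := `Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742 reflector.go:147] configmaps "coredns" is forbidden
    Feb 29 01:51:31 no-preload-449532 kubelet[9947]: I0229 01:51:31.000000 kubelet.go:100] ordinary line`
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if problem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }
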
	I0229 01:55:39.687296  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:42.187476  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:44.189760  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:46.686245  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:48.687170  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:45.455294  169202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:55:45.470848  169202 api_server.go:72] duration metric: took 4m14.039378333s to wait for apiserver process to appear ...
	I0229 01:55:45.470876  169202 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:55:45.470953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:45.489614  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:45.489694  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:45.507881  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:45.507953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:45.540532  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:45.540609  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:45.560035  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:45.560134  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:45.579280  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:45.579376  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:45.597768  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:45.597865  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:45.618789  169202 logs.go:276] 0 containers: []
	W0229 01:55:45.618814  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:45.618860  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:45.638075  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:45.638159  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:45.656571  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:45.656611  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:45.656627  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:45.686218  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:45.686254  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:45.709338  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:45.709370  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:45.755652  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:45.755689  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:45.822848  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:45.822883  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:45.879421  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.879584  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885205  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885368  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:45.906780  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:45.906805  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:45.922651  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:45.922688  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:45.956685  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:45.956715  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:45.980079  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:45.980108  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:46.000800  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:46.000828  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:46.059443  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:46.059478  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:46.157674  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:46.157708  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:46.179678  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:46.179710  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:46.225916  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.225953  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:46.226025  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:46.226043  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226051  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226062  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226068  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:46.226077  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.226084  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:55:51.187510  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:53.686827  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.187244  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:58.686099  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.228095  169202 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0229 01:55:56.232840  169202 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0229 01:55:56.233957  169202 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:55:56.233979  169202 api_server.go:131] duration metric: took 10.763095955s to wait for apiserver health ...
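The health wait above polls the apiserver's /healthz endpoint until it returns HTTP 200 with the body "ok". A minimal manual equivalent (sketch; -k skips TLS verification, which is acceptable for a local probe):

    curl -sk https://192.168.39.152:8443/healthz
    # expected output: ok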
	I0229 01:55:56.233988  169202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:55:56.234055  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:56.257140  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:56.257221  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:56.286172  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:56.286263  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:56.305014  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:56.305084  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:56.326712  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:56.326787  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:56.347079  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:56.347145  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:56.367625  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:56.367692  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:56.385387  169202 logs.go:276] 0 containers: []
	W0229 01:55:56.385431  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:56.385480  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:56.403032  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:56.403097  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:56.422016  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:56.422055  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:56.422072  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:56.444017  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:56.444045  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:56.473118  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:56.473151  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:56.518781  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:56.518819  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:56.542772  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:56.542814  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:56.604186  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.604348  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611644  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611847  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:56.635056  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:56.635088  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:56.649472  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:56.649496  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:56.763663  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:56.763696  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:56.793607  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:56.793638  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:56.857562  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:56.857597  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:56.924313  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:56.924343  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:56.962407  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:56.962436  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:56.985427  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:56.985458  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:57.007649  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007675  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:57.007729  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:57.007740  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007748  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007760  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007769  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:57.007777  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007785  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:00.687363  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:03.187734  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.019205  169202 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:07.019240  169202 system_pods.go:61] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.019246  169202 system_pods.go:61] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.019252  169202 system_pods.go:61] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.019257  169202 system_pods.go:61] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.019262  169202 system_pods.go:61] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.019266  169202 system_pods.go:61] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.019275  169202 system_pods.go:61] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.019281  169202 system_pods.go:61] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.019292  169202 system_pods.go:74] duration metric: took 10.78529776s to wait for pod list to return data ...
	I0229 01:56:07.019300  169202 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:07.021795  169202 default_sa.go:45] found service account: "default"
	I0229 01:56:07.021822  169202 default_sa.go:55] duration metric: took 2.513891ms for default service account to be created ...
	I0229 01:56:07.021833  169202 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:07.027968  169202 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:07.027991  169202 system_pods.go:89] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.027999  169202 system_pods.go:89] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.028006  169202 system_pods.go:89] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.028012  169202 system_pods.go:89] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.028021  169202 system_pods.go:89] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.028028  169202 system_pods.go:89] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.028044  169202 system_pods.go:89] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.028053  169202 system_pods.go:89] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.028065  169202 system_pods.go:126] duration metric: took 6.224923ms to wait for k8s-apps to be running ...
	I0229 01:56:07.028076  169202 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:07.028144  169202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:07.043579  169202 system_svc.go:56] duration metric: took 15.495808ms WaitForService to wait for kubelet.
	I0229 01:56:07.043608  169202 kubeadm.go:581] duration metric: took 4m35.612143208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:56:07.043638  169202 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:07.046428  169202 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:07.046447  169202 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:07.046457  169202 node_conditions.go:105] duration metric: took 2.814262ms to run NodePressure ...
	I0229 01:56:07.046469  169202 start.go:228] waiting for startup goroutines ...
	I0229 01:56:07.046475  169202 start.go:233] waiting for cluster config update ...
	I0229 01:56:07.046485  169202 start.go:242] writing updated cluster config ...
	I0229 01:56:07.046741  169202 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:07.095609  169202 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:56:07.097736  169202 out.go:177] * Done! kubectl is now configured to use "no-preload-449532" cluster and "default" namespace by default
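"Done!" means the kubeconfig context now points at the new cluster, so plain kubectl calls target it; a quick sanity check (sketch):

    kubectl config current-context    # should print: no-preload-449532
    kubectl get nodes                 # node should report STATUS Ready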
	I0229 01:56:05.188374  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.188627  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:09.688264  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:12.188346  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:14.686751  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:16.687139  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:18.187973  169852 pod_ready.go:81] duration metric: took 4m0.008139239s waiting for pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace to be "Ready" ...
	E0229 01:56:18.187998  169852 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:56:18.188006  169852 pod_ready.go:38] duration metric: took 4m0.805438302s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
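The 4m0s WaitExtra budget expired with metrics-server-57f55c9bc5-pvkcg still Pending/ContainersNotReady, which is what ultimately fails this wait. A sketch of the usual next step, run with kubectl pointed at this profile and assuming the addon's standard k8s-app=metrics-server label:

    kubectl -n kube-system describe pod -l k8s-app=metrics-server   # check Events: pulls, scheduling, probes
    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=50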
	I0229 01:56:18.188024  169852 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:56:18.188086  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:18.208854  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:18.208946  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:18.227659  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:18.227750  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:18.246475  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:18.246552  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:18.268583  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:18.268661  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:18.287872  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:18.287962  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:18.306446  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:18.306527  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:18.325914  169852 logs.go:276] 0 containers: []
	W0229 01:56:18.325943  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:18.325996  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:18.345838  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:18.345948  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:18.365691  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:18.365744  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:18.365763  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:18.390529  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:18.390558  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:18.441681  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:18.441715  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:18.521769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:18.522020  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:18.546113  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:18.546149  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:18.564900  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:18.564934  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:18.713864  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:18.713900  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:18.751902  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:18.752004  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:18.798480  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:18.798507  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:18.845423  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:18.845452  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:18.873120  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:18.873144  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:18.898180  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:18.898209  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:18.920066  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:18.920097  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:18.991663  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:18.991695  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:19.060048  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060079  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:19.060145  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:19.060170  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:19.060184  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:19.060198  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060209  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:32.235880  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:56:32.236029  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:56:32.238423  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:56:32.238502  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:56:32.238599  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:56:32.238744  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:56:32.238904  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:56:32.239073  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:56:32.239200  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:56:32.239271  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:56:32.239350  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:56:32.241126  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:56:32.241192  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:56:32.241251  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:56:32.241317  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:56:32.241394  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:56:32.241469  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:56:32.241523  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:56:32.241605  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:56:32.241700  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:56:32.241811  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:56:32.241921  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:56:32.242001  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:56:32.242081  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:56:32.242164  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:56:32.242247  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:56:32.242344  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:56:32.242427  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:56:32.242484  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:56:29.061463  169852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:56:29.077717  169852 api_server.go:72] duration metric: took 4m14.467720845s to wait for apiserver process to appear ...
	I0229 01:56:29.077739  169852 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:56:29.077840  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:29.096876  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:29.096961  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:29.114345  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:29.114423  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:29.131634  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:29.131705  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:29.149068  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:29.149139  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:29.166411  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:29.166483  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:29.182906  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:29.182982  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:29.199536  169852 logs.go:276] 0 containers: []
	W0229 01:56:29.199556  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:29.199599  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:29.218889  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:29.218951  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:29.237207  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:29.237245  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:29.237258  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:29.253233  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:29.253267  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:29.379843  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:29.379871  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:29.411795  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:29.411822  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:29.438557  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:29.438583  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:29.459479  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:29.459505  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:29.507590  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:29.507620  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:29.573263  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.573453  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.595549  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:29.595574  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:29.637026  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:29.637058  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:29.658572  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:29.658603  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:29.683814  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:29.683844  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:29.705482  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:29.705511  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:29.768497  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:29.768531  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:29.836247  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836270  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:29.836320  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:29.836331  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.836339  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.836350  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836360  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:32.244633  170748 out.go:204]   - Booting up control plane ...
	I0229 01:56:32.244727  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:56:32.244807  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:56:32.244884  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:56:32.244992  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:56:32.245189  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:56:32.245267  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:56:32.245360  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245532  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245599  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245746  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245826  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245998  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246093  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246273  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246359  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246574  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246588  170748 kubeadm.go:322] 
	I0229 01:56:32.246630  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:56:32.246679  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:56:32.246693  170748 kubeadm.go:322] 
	I0229 01:56:32.246740  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:56:32.246791  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:56:32.246892  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:56:32.246905  170748 kubeadm.go:322] 
	I0229 01:56:32.247026  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:56:32.247072  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:56:32.247116  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:56:32.247124  170748 kubeadm.go:322] 
	I0229 01:56:32.247212  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:56:32.247289  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:56:32.247361  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:56:32.247406  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:56:32.247488  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:56:32.247541  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0229 01:56:32.247677  170748 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
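The three checks kubeadm suggests can be run from the host through minikube ssh; a sketch, with <profile> standing in for this run's profile name (not shown in the excerpt above):

    minikube ssh -p <profile> "sudo systemctl status kubelet"
    minikube ssh -p <profile> "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
    minikube ssh -p <profile> "docker ps -a | grep kube | grep -v pause"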
	I0229 01:56:32.247732  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:56:32.689675  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:32.704123  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:56:32.713829  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:56:32.713881  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:56:32.847290  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:56:32.879658  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:56:32.959513  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
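The recurring IsDockerSystemdCheck warning is addressed by switching Docker's cgroup driver to systemd, the driver kubeadm recommends. A common remedy inside the guest (sketch; it silences the warning but is not necessarily the cause of this run's kubelet failure):

    # overwrites any existing daemon.json; merge by hand if one is present
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker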
	I0229 01:56:39.838133  169852 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8444/healthz ...
	I0229 01:56:39.843637  169852 api_server.go:279] https://192.168.72.56:8444/healthz returned 200:
	ok
	I0229 01:56:39.844896  169852 api_server.go:141] control plane version: v1.28.4
	I0229 01:56:39.844921  169852 api_server.go:131] duration metric: took 10.767174552s to wait for apiserver health ...
	I0229 01:56:39.844930  169852 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:56:39.845005  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:39.867188  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:39.867264  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:39.890265  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:39.890345  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:39.911540  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:39.911617  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:39.939266  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:39.939340  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:39.957270  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:39.957337  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:39.974956  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:39.975025  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:39.991794  169852 logs.go:276] 0 containers: []
	W0229 01:56:39.991815  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:39.991856  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:40.009143  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:40.009208  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:40.026359  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:40.026392  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:40.026406  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:40.046944  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:40.046969  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:40.067580  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:40.067604  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:40.091791  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:40.091812  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:40.151587  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:40.151619  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:40.221769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.221978  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.247432  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:40.247466  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:40.283196  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:40.283227  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:40.305677  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:40.305703  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:40.333975  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:40.334003  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:40.402520  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:40.402558  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:40.418892  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:40.418926  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:40.537554  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:40.537597  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:40.576026  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:40.576067  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:40.622017  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622055  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:40.622123  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:40.622137  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.622147  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.622165  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622178  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:50.632890  169852 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:50.632919  169852 system_pods.go:61] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.632924  169852 system_pods.go:61] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.632929  169852 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.632933  169852 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.632936  169852 system_pods.go:61] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.632939  169852 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.632944  169852 system_pods.go:61] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.632948  169852 system_pods.go:61] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.632955  169852 system_pods.go:74] duration metric: took 10.788019346s to wait for pod list to return data ...
	I0229 01:56:50.632961  169852 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:50.636262  169852 default_sa.go:45] found service account: "default"
	I0229 01:56:50.636279  169852 default_sa.go:55] duration metric: took 3.313291ms for default service account to be created ...
	I0229 01:56:50.636292  169852 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:50.641677  169852 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:50.641698  169852 system_pods.go:89] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.641704  169852 system_pods.go:89] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.641710  169852 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.641714  169852 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.641718  169852 system_pods.go:89] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.641722  169852 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.641730  169852 system_pods.go:89] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.641736  169852 system_pods.go:89] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.641743  169852 system_pods.go:126] duration metric: took 5.445558ms to wait for k8s-apps to be running ...
	I0229 01:56:50.641749  169852 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:50.641806  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:50.660446  169852 system_svc.go:56] duration metric: took 18.690637ms WaitForService to wait for kubelet.
	I0229 01:56:50.660469  169852 kubeadm.go:581] duration metric: took 4m36.05047851s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:56:50.660486  169852 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:50.663507  169852 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:50.663526  169852 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:50.663537  169852 node_conditions.go:105] duration metric: took 3.04635ms to run NodePressure ...
	I0229 01:56:50.663547  169852 start.go:228] waiting for startup goroutines ...
	I0229 01:56:50.663552  169852 start.go:233] waiting for cluster config update ...
	I0229 01:56:50.663561  169852 start.go:242] writing updated cluster config ...
	I0229 01:56:50.663826  169852 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:50.710751  169852 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:56:50.712950  169852 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-308557" cluster and "default" namespace by default
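For reference, the "minor skew: 1" note above is within kubectl's supported window (the client may be one minor version ahead of or behind the API server), so kubectl 1.29.2 against a 1.28.4 cluster is expected to work. A minimal way to confirm the client/server pair for this profile, as a sketch (profile name taken from the log above; not a command the test ran):

	out/minikube-linux-amd64 kubectl -p default-k8s-diff-port-308557 -- version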
	I0229 01:58:29.528786  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:58:29.528884  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:58:29.530491  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:58:29.530596  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:58:29.530680  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:58:29.530764  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:58:29.530861  170748 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 01:58:29.530964  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:58:29.531068  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:58:29.531119  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:58:29.531176  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:58:29.532944  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:58:29.533047  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:58:29.533144  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:58:29.533247  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:58:29.533305  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:58:29.533379  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:58:29.533441  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:58:29.533506  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:58:29.533567  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:58:29.533636  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:58:29.533700  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:58:29.533744  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:58:29.533806  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:58:29.533878  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:58:29.533967  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:58:29.534067  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:58:29.534153  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:58:29.534217  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:58:29.535694  170748 out.go:204]   - Booting up control plane ...
	I0229 01:58:29.535778  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:58:29.535844  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:58:29.535904  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:58:29.535972  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:58:29.536127  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:58:29.536212  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:58:29.536285  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536458  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536538  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536729  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536791  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536941  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537007  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537189  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537267  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537495  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537513  170748 kubeadm.go:322] 
	I0229 01:58:29.537569  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:58:29.537626  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:58:29.537636  170748 kubeadm.go:322] 
	I0229 01:58:29.537685  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:58:29.537744  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:58:29.537903  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:58:29.537915  170748 kubeadm.go:322] 
	I0229 01:58:29.538065  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:58:29.538113  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:58:29.538174  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:58:29.538183  170748 kubeadm.go:322] 
	I0229 01:58:29.538325  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:58:29.538450  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:58:29.538581  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:58:29.538656  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:58:29.538743  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:58:29.538829  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:58:29.538866  170748 kubeadm.go:406] StartCluster complete in 8m6.061536028s
	I0229 01:58:29.538947  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:58:29.556117  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.556141  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:58:29.556205  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:58:29.572791  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.572812  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:58:29.572857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:58:29.589544  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.589565  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:58:29.589625  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:58:29.605410  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.605426  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:58:29.605472  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:58:29.621393  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.621412  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:58:29.621450  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:58:29.637671  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.637690  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:58:29.637732  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:58:29.653501  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.653533  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:58:29.653590  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:58:29.669033  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.669058  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:58:29.669072  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:58:29.669086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:58:29.722126  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:58:29.722161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:58:29.735919  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:58:29.735946  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:58:29.803585  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:58:29.803615  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:58:29.803629  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:58:29.843153  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:58:29.843183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 01:58:29.906091  170748 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:58:29.906150  170748 out.go:239] * 
	W0229 01:58:29.906209  170748 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.906231  170748 out.go:239] * 
	W0229 01:58:29.906995  170748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:58:29.910220  170748 out.go:177] 
	W0229 01:58:29.911536  170748 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.911581  170748 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:58:29.911600  170748 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:58:29.912937  170748 out.go:177] 
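The suggestion above maps to a concrete restart of the failed profile with the extra kubelet flag. A minimal sketch, assuming the old-k8s-version-096771 profile named in the sections below and the kvm2 driver used throughout this run (not a command the test executed):

	out/minikube-linux-amd64 start -p old-k8s-version-096771 --driver=kvm2 \
	    --extra-config=kubelet.cgroup-driver=systemd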
	
	
	==> Docker <==
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776150999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776206246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776256438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776308167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776347865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776476626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776540257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776622510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776676461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776885278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776965976Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777030325Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777311132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777539525Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777641426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777854491Z" level=info msg="containerd successfully booted in 0.034774s"
	Feb 29 01:50:21 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:21.976247648Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.012708683Z" level=info msg="Loading containers: start."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.140588585Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.193875502Z" level=info msg="Loading containers: done."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209172228Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209243974Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241102168Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241236205Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:50:22 old-k8s-version-096771 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-29T02:07:32Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 01:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 01:50] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +2.425571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914813] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.071671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054332] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.114259] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.335012] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.127181] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +0.149601] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +5.311700] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.076969] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.064548] systemd-fstab-generator[1441]: Ignoring "noauto" option for root device
	[  +0.060768] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 01:54] systemd-fstab-generator[9475]: Ignoring "noauto" option for root device
	[  +0.059471] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 01:56] systemd-fstab-generator[11246]: Ignoring "noauto" option for root device
	[  +0.070220] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:07:32 up 17 min,  0 users,  load average: 0.05, 0.24, 0.19
	Linux old-k8s-version-096771 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:07:30 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:07:31 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 877.
	Feb 29 02:07:31 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:07:31 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: I0229 02:07:31.542161   20590 server.go:410] Version: v1.16.0
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: I0229 02:07:31.542376   20590 plugins.go:100] No cloud provider specified.
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: I0229 02:07:31.542388   20590 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: I0229 02:07:31.544597   20590 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: W0229 02:07:31.545455   20590 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: W0229 02:07:31.545538   20590 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 02:07:31 old-k8s-version-096771 kubelet[20590]: F0229 02:07:31.545569   20590 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:07:31 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:07:31 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:07:32 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 878.
	Feb 29 02:07:32 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:07:32 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: I0229 02:07:32.298097   20618 server.go:410] Version: v1.16.0
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: I0229 02:07:32.298285   20618 plugins.go:100] No cloud provider specified.
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: I0229 02:07:32.298296   20618 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: I0229 02:07:32.300323   20618 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: W0229 02:07:32.301152   20618 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: W0229 02:07:32.301235   20618 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 02:07:32 old-k8s-version-096771 kubelet[20618]: F0229 02:07:32.301263   20618 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:07:32 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:07:32 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (258.178658ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-096771" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)
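The kubelet section above shows the immediate cause of this failure: the kubelet crash-loops with "failed to run Kubelet: mountpoint for cpu not found" (systemd restart counter at 877-878), which is consistent with the cgroup configuration warnings raised in the kubeadm preflight output. On the node, the troubleshooting commands the log itself recommends would surface the same loop; as a sketch:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50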

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (356.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:07:45.933203  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:07:57.028590  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:08:35.605826  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:08:52.961319  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:08:55.670844  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:09:10.694314  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:09:42.887156  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:09:46.679433  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:09:55.517335  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:09:57.863190  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:10:31.345112  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
E0229 02:10:41.966421  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/no-preload-449532/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 23 more times]
E0229 02:11:05.332616  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 5 more times]
E0229 02:11:11.262323  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 22 more times]
E0229 02:11:34.118668  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/default-k8s-diff-port-308557/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 9 more times]
E0229 02:11:44.238952  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 43 more times]
E0229 02:12:28.297811  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 27 more times]
E0229 02:12:57.028586  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
[previous message repeated 28 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.59:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.59:8443: connect: connection refused
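The warnings above come from the helper that lists dashboard pods by label through the apiserver; with the apiserver down, every list call fails with "connection refused". Below is a minimal sketch of that kind of label-selector pod list using client-go, not the test's actual helper; the kubeconfig path is hypothetical (the real harness resolves it per profile), while the namespace and selector are taken from the log lines:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods in the kubernetes-dashboard namespace by the k8s-app label,
	// mirroring the request URL seen in the warnings above.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// When the apiserver is stopped, this is the "connection refused" error.
		fmt.Println("WARNING: pod list returned:", err)
		return
	}
	fmt.Println("found", len(pods.Items), "dashboard pods")
}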
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (256.897051ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-096771" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-096771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-096771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.562µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-096771 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
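The 9m0s budget is enforced with a context deadline: once it expires, the wait aborts with "context deadline exceeded" and even the follow-up kubectl commands (run with the same context) fail instantly, as seen above. An illustrative sketch of such a deadline-bounded wait loop follows; the names here are hypothetical, not the harness's actual code, and the demo uses a short timeout so it terminates quickly:

package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor polls check() until it returns true or the timeout elapses,
// mirroring the "failed to start within 9m0s" behavior in the log.
func waitFor(check func() bool, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(5 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// This is the error shape seen above once the budget elapses.
			return fmt.Errorf("failed to start within %v: %w", timeout, ctx.Err())
		case <-tick.C:
			if check() {
				return nil
			}
		}
	}
}

func main() {
	// The check never succeeds here, so the deadline fires first.
	err := waitFor(func() bool { return false }, 2*time.Second)
	fmt.Println(err)
}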
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (227.71291ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
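Note how the two status probes above disagree: --format={{.Host}} prints "Running" (the VM is up) while --format={{.APIServer}} prints "Stopped", which is consistent with a restarted machine whose control plane never came back. The --format flag renders a Go text/template over minikube's status record; a minimal sketch of that rendering, with an illustrative struct whose field names mirror the flags used above:

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in for minikube's status record.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// --format={{.APIServer}} selects a single field from the record.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Stopped"
		panic(err)
	}
}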
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-096771 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-384331 image list                          | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| delete  | -p embed-certs-384331                                  | embed-certs-384331           | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:52 UTC |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:52 UTC | 29 Feb 24 01:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-133807             | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-133807                  | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-133807 --memory=2200 --alsologtostderr   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:53 UTC | 29 Feb 24 01:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-133807 image list                           | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| delete  | -p newest-cni-133807                                   | newest-cni-133807            | jenkins | v1.32.0 | 29 Feb 24 01:54 UTC | 29 Feb 24 01:54 UTC |
	| image   | no-preload-449532 image list                           | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| delete  | -p no-preload-449532                                   | no-preload-449532            | jenkins | v1.32.0 | 29 Feb 24 01:56 UTC | 29 Feb 24 01:56 UTC |
	| image   | default-k8s-diff-port-308557                           | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-308557 | jenkins | v1.32.0 | 29 Feb 24 01:57 UTC | 29 Feb 24 01:57 UTC |
	|         | default-k8s-diff-port-308557                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 01:53:36
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 01:53:36.885660  172338 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:53:36.885812  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.885823  172338 out.go:304] Setting ErrFile to fd 2...
	I0229 01:53:36.885830  172338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:53:36.886451  172338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:53:36.887445  172338 out.go:298] Setting JSON to false
	I0229 01:53:36.888850  172338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5768,"bootTime":1709165849,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 01:53:36.888922  172338 start.go:139] virtualization: kvm guest
	I0229 01:53:36.890884  172338 out.go:177] * [newest-cni-133807] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 01:53:36.892679  172338 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 01:53:36.893863  172338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 01:53:36.892754  172338 notify.go:220] Checking for updates...
	I0229 01:53:36.895149  172338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:53:36.896330  172338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 01:53:36.897604  172338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 01:53:36.898902  172338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 01:53:36.900711  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:36.901271  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.901326  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.917325  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0229 01:53:36.917751  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.918470  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.918496  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.918925  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.919139  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:36.919426  172338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 01:53:36.919862  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.919920  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:36.935501  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0229 01:53:36.935929  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:36.936397  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:36.936423  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:36.936740  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:36.936966  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:36.975046  172338 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 01:53:36.976294  172338 start.go:299] selected driver: kvm2
	I0229 01:53:36.976310  172338 start.go:903] validating driver "kvm2" against &{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.976488  172338 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 01:53:36.977258  172338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.977350  172338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 01:53:36.994597  172338 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 01:53:36.994975  172338 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 01:53:36.995042  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:53:36.995059  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:53:36.995069  172338 start_flags.go:323] config:
	{Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:53:36.995229  172338 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:53:36.997622  172338 out.go:177] * Starting control plane node newest-cni-133807 in cluster newest-cni-133807
	I0229 01:53:36.998696  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:53:36.998739  172338 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 01:53:36.998757  172338 cache.go:56] Caching tarball of preloaded images
	I0229 01:53:36.998845  172338 preload.go:174] Found /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 01:53:36.998863  172338 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 01:53:36.998993  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
	I0229 01:53:36.999265  172338 start.go:365] acquiring machines lock for newest-cni-133807: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 01:53:36.999328  172338 start.go:369] acquired machines lock for "newest-cni-133807" in 34.294µs
	I0229 01:53:36.999350  172338 start.go:96] Skipping create...Using existing machine configuration
	I0229 01:53:36.999359  172338 fix.go:54] fixHost starting: 
	I0229 01:53:36.999756  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:53:36.999804  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:53:37.014484  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0229 01:53:37.014854  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:53:37.015358  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:53:37.015380  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:53:37.015794  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:53:37.016017  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:37.016186  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:53:37.017841  172338 fix.go:102] recreateIfNeeded on newest-cni-133807: state=Stopped err=<nil>
	I0229 01:53:37.017866  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	W0229 01:53:37.018024  172338 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 01:53:37.019758  172338 out.go:177] * Restarting existing kvm2 VM for "newest-cni-133807" ...
	I0229 01:53:35.187854  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:37.188009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:35.706584  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:38.207259  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:36.771905  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.771929  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:36.771974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:36.795209  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.795242  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:36.795305  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:36.818025  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.818055  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:36.818111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:36.845202  170748 logs.go:276] 0 containers: []
	W0229 01:53:36.845228  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:36.845238  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:36.845249  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:36.863710  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:36.863746  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:36.941560  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:36.941585  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:36.941599  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:36.985345  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:36.985374  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:37.049297  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:37.049331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:39.600693  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:39.614787  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:39.637491  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.637520  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:39.637579  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:39.655913  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.655934  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:39.655974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:39.673860  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.673884  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:39.673948  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:39.694282  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.694306  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:39.694362  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:39.713273  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.713298  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:39.713354  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:39.738601  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.738637  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:39.738694  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:39.757911  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.757946  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:39.758003  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:39.785844  170748 logs.go:276] 0 containers: []
	W0229 01:53:39.785876  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:39.785889  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:39.785923  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:39.890021  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:39.890046  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:39.890063  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:39.946696  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:39.946738  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:40.011265  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:40.011294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:40.061033  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:40.061066  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:37.020899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Start
	I0229 01:53:37.021060  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring networks are active...
	I0229 01:53:37.021715  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network default is active
	I0229 01:53:37.022109  172338 main.go:141] libmachine: (newest-cni-133807) Ensuring network mk-newest-cni-133807 is active
	I0229 01:53:37.022542  172338 main.go:141] libmachine: (newest-cni-133807) Getting domain xml...
	I0229 01:53:37.023299  172338 main.go:141] libmachine: (newest-cni-133807) Creating domain...
	I0229 01:53:38.239149  172338 main.go:141] libmachine: (newest-cni-133807) Waiting to get IP...
	I0229 01:53:38.240362  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.240876  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.240965  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.240868  172372 retry.go:31] will retry after 275.310864ms: waiting for machine to come up
	I0229 01:53:38.517440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.518160  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.518185  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.518111  172372 retry.go:31] will retry after 317.329288ms: waiting for machine to come up
	I0229 01:53:38.836647  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:38.837248  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:38.837276  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:38.837187  172372 retry.go:31] will retry after 392.589727ms: waiting for machine to come up
	I0229 01:53:39.231732  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.232246  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.232285  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.232194  172372 retry.go:31] will retry after 424.503594ms: waiting for machine to come up
	I0229 01:53:39.658948  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:39.659654  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:39.659681  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:39.659612  172372 retry.go:31] will retry after 509.777965ms: waiting for machine to come up
	I0229 01:53:40.171487  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.172122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.172152  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.172076  172372 retry.go:31] will retry after 742.622621ms: waiting for machine to come up
	I0229 01:53:40.915896  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:40.916440  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:40.916470  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:40.916388  172372 retry.go:31] will retry after 749.503001ms: waiting for machine to come up
	I0229 01:53:41.667865  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:41.668416  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:41.668460  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:41.668341  172372 retry.go:31] will retry after 899.624948ms: waiting for machine to come up
	I0229 01:53:39.686755  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:41.687219  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:40.705623  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:42.708440  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:42.579474  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:42.594968  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:42.614588  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.614619  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:42.614678  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:42.633590  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.633626  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:42.633675  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:42.650641  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.650670  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:42.650725  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:42.667825  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.667848  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:42.667896  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:42.687222  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.687250  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:42.687306  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:42.707192  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.707221  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:42.707283  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:42.727815  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.727842  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:42.727909  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:42.747315  170748 logs.go:276] 0 containers: []
	W0229 01:53:42.747344  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:42.747358  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:42.747373  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:42.835128  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.835153  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:42.835166  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:42.878670  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:42.878704  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:42.938260  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:42.938295  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:42.988986  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:42.989023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.504852  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:45.519775  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:45.544878  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.544907  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:45.544956  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:45.564358  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.564392  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:45.564452  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:45.585154  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.585184  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:45.585248  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:45.605709  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.605739  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:45.605811  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:45.623803  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.623890  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:45.623962  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:45.643133  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.643164  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:45.643234  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:45.661762  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.661802  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:45.661861  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:45.680592  170748 logs.go:276] 0 containers: []
	W0229 01:53:45.680620  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:45.680634  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:45.680649  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:45.745642  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:45.745700  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:45.823069  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:45.823109  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:45.892445  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:45.892486  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:45.910297  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:45.910333  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:45.990129  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:42.569261  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:42.569902  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:42.569929  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:42.569879  172372 retry.go:31] will retry after 1.844906669s: waiting for machine to come up
	I0229 01:53:44.416650  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:44.417122  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:44.417147  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:44.417082  172372 retry.go:31] will retry after 1.668166694s: waiting for machine to come up
	I0229 01:53:46.086877  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:46.087409  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:46.087439  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:46.087360  172372 retry.go:31] will retry after 2.357310139s: waiting for machine to come up
	I0229 01:53:44.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:46.187804  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:48.689109  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:45.205820  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:47.207153  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:49.207534  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:48.491272  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:48.505184  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:48.525599  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.525629  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:48.525706  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:48.546500  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.546532  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:48.546594  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:48.568626  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.568658  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:48.568721  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:48.587381  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.587414  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:48.587473  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:48.605940  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.605978  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:48.606036  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:48.627862  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.627939  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:48.627990  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:48.647290  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.647337  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:48.647409  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:48.668387  170748 logs.go:276] 0 containers: []
	W0229 01:53:48.668421  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:48.668436  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:48.668465  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:48.749495  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:48.749564  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:48.768497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:48.768537  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:48.851955  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:48.851986  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:48.852007  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:48.897006  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:48.897051  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
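The "container status" step is deliberately runtime-agnostic: it prefers crictl when `which` finds it on PATH and otherwise falls back to plain `docker ps -a`. The shell line below is copied verbatim from the log; wrapping it in exec is the only addition:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the fallback chain from the log: the backquoted `which` resolves
// crictl first, and the trailing `|| sudo docker ps -a` covers hosts without crictl.
func containerStatus() (string, error) {
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out)
	if err != nil {
		fmt.Println("container status failed:", err)
	}
}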
	I0229 01:53:51.469648  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:51.483142  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:51.505315  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.505336  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:51.505382  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:51.527266  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.527291  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:51.527349  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:51.549665  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.549695  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:51.549762  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:51.567017  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.567048  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:51.567115  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:51.584257  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.584283  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:51.584330  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:51.601100  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.601120  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:51.601162  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:51.617334  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.617364  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:51.617412  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:51.634847  170748 logs.go:276] 0 containers: []
	W0229 01:53:51.634870  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:51.634884  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:51.634906  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:51.699822  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:51.699852  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:51.699874  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:51.748726  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:51.748767  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:48.446918  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:48.447458  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:48.447486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:48.447405  172372 retry.go:31] will retry after 3.5649966s: waiting for machine to come up
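The retry.go lines ("will retry after 3.5649966s", later 3.221741445s) show a randomized wait between attempts while the KVM domain acquires an IP. A rough sketch of that pattern; the 2–5s jitter window and the attempt budget are assumptions, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs f with a randomized pause between attempts, in the spirit
// of the retry.go lines in the log. Bounds and attempt count are illustrative.
func retryWithJitter(attempts int, f func() error) error {
	for i := 0; i < attempts; i++ {
		if err := f(); err == nil {
			return nil
		}
		d := time.Duration((2 + rand.Float64()*3) * float64(time.Second)) // 2s..5s
		fmt.Printf("retry.go: will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
	}
	return errors.New("machine did not come up")
}

func main() {
	tries := 0
	err := retryWithJitter(5, func() error {
		tries++
		if tries < 3 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}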
	I0229 01:53:50.690417  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:53.186096  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.706757  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:54.207589  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:51.821091  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:51.821125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:51.870732  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:51.870762  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.385901  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:54.399480  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:54.417966  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.417996  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:54.418059  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:54.436602  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.436625  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:54.436671  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:54.454846  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.454871  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:54.454929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:54.475020  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.475052  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:54.475106  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:54.492090  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.492124  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:54.492179  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:54.508529  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.508552  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:54.508612  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:54.525505  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.525532  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:54.525592  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:54.542182  170748 logs.go:276] 0 containers: []
	W0229 01:53:54.542205  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:54.542217  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:54.542231  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:54.591034  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:54.591075  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:54.607014  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:54.607059  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:54.673259  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:54.673277  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:54.673294  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:54.735883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:54.735933  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:53:52.015966  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:52.016461  172338 main.go:141] libmachine: (newest-cni-133807) DBG | unable to find current IP address of domain newest-cni-133807 in network mk-newest-cni-133807
	I0229 01:53:52.016486  172338 main.go:141] libmachine: (newest-cni-133807) DBG | I0229 01:53:52.016421  172372 retry.go:31] will retry after 3.221741445s: waiting for machine to come up
	I0229 01:53:55.241903  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242455  172338 main.go:141] libmachine: (newest-cni-133807) Found IP for machine: 192.168.50.38
	I0229 01:53:55.242486  172338 main.go:141] libmachine: (newest-cni-133807) Reserving static IP address...
	I0229 01:53:55.242513  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has current primary IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.242953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.242982  172338 main.go:141] libmachine: (newest-cni-133807) Reserved static IP address: 192.168.50.38
	I0229 01:53:55.243002  172338 main.go:141] libmachine: (newest-cni-133807) DBG | skip adding static IP to network mk-newest-cni-133807 - found existing host DHCP lease matching {name: "newest-cni-133807", mac: "52:54:00:2f:31:1d", ip: "192.168.50.38"}
	I0229 01:53:55.243021  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Getting to WaitForSSH function...
	I0229 01:53:55.243051  172338 main.go:141] libmachine: (newest-cni-133807) Waiting for SSH to be available...
	I0229 01:53:55.245263  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245602  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.245635  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.245719  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH client type: external
	I0229 01:53:55.245756  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa (-rw-------)
	I0229 01:53:55.245815  172338 main.go:141] libmachine: (newest-cni-133807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.38 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 01:53:55.245837  172338 main.go:141] libmachine: (newest-cni-133807) DBG | About to run SSH command:
	I0229 01:53:55.245849  172338 main.go:141] libmachine: (newest-cni-133807) DBG | exit 0
	I0229 01:53:55.365823  172338 main.go:141] libmachine: (newest-cni-133807) DBG | SSH cmd err, output: <nil>: 
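"Waiting for SSH to be available" with the external client boils down to invoking the system ssh binary with the options dumped at DBG level above and running `exit 0` until it succeeds. A sketch under that reading; the 30x2s retry cadence is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH shells out to /usr/bin/ssh with options taken from the DBG dump above
// and treats a successful remote `exit 0` as proof that sshd is accepting logins.
func waitForSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // `exit 0` ran remotely, so SSH is up
		}
		time.Sleep(2 * time.Second) // poll cadence is an assumption
	}
	return fmt.Errorf("SSH to %s never became available", addr)
}

func main() {
	fmt.Println(waitForSSH("192.168.50.38",
		"/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa"))
}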
	I0229 01:53:55.366165  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetConfigRaw
	I0229 01:53:55.366733  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.369039  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369334  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.369365  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.369634  172338 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/config.json ...
	I0229 01:53:55.369878  172338 machine.go:88] provisioning docker machine ...
	I0229 01:53:55.369899  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.370074  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370280  172338 buildroot.go:166] provisioning hostname "newest-cni-133807"
	I0229 01:53:55.370305  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.370476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.372352  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372683  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.372714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.372826  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.373050  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373221  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.373397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.373545  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.373765  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.373801  172338 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-133807 && echo "newest-cni-133807" | sudo tee /etc/hostname
	I0229 01:53:55.501380  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-133807
	
	I0229 01:53:55.501425  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.504532  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.504925  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.504953  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.505203  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.505442  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.505829  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.505993  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.506180  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.506197  172338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-133807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-133807/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-133807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 01:53:55.627363  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
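Hostname provisioning is the two SSH commands visible above: set the live hostname plus /etc/hostname, then patch /etc/hosts so 127.0.1.1 resolves to the new name, rewriting an existing 127.0.1.1 line or appending one. Rebuilt as Go, with the shell taken verbatim from the log:

package main

import "fmt"

// hostnameCommands reproduces the two provisioning steps from the log: set the
// kernel hostname and /etc/hostname, then ensure an /etc/hosts 127.0.1.1 mapping.
func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCommands("newest-cni-133807") {
		fmt.Println(c)
	}
}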
	I0229 01:53:55.627403  172338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
	I0229 01:53:55.627445  172338 buildroot.go:174] setting up certificates
	I0229 01:53:55.627465  172338 provision.go:83] configureAuth start
	I0229 01:53:55.627478  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetMachineName
	I0229 01:53:55.627799  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:55.630746  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631187  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.631216  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.631361  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.633714  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634069  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.634098  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.634214  172338 provision.go:138] copyHostCerts
	I0229 01:53:55.634269  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
	I0229 01:53:55.634288  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
	I0229 01:53:55.634356  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
	I0229 01:53:55.634447  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
	I0229 01:53:55.634455  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
	I0229 01:53:55.634478  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
	I0229 01:53:55.634526  172338 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
	I0229 01:53:55.634534  172338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
	I0229 01:53:55.634553  172338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
	I0229 01:53:55.634601  172338 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.newest-cni-133807 san=[192.168.50.38 192.168.50.38 localhost 127.0.0.1 minikube newest-cni-133807]
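The server cert generated here carries every name the daemon might be reached by: the machine IP, localhost, 127.0.0.1, minikube, and the hostname. A condensed crypto/x509 sketch producing a cert with that SAN set; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log line:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-133807"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list logged above.
		IPAddresses: []net.IP{net.ParseIP("192.168.50.38"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-133807"},
	}
	// Self-signed (template doubles as parent); minikube passes its CA here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}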
	I0229 01:53:55.739651  172338 provision.go:172] copyRemoteCerts
	I0229 01:53:55.739705  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 01:53:55.739730  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.742433  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.742797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.742821  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.743006  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.743211  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.743367  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.743503  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:55.825143  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 01:53:55.850150  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 01:53:55.873623  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 01:53:55.897271  172338 provision.go:86] duration metric: configureAuth took 269.790188ms
	I0229 01:53:55.897298  172338 buildroot.go:189] setting minikube options for container-runtime
	I0229 01:53:55.897528  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:53:55.897558  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:55.897880  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:55.900413  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900726  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:55.900754  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:55.900862  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:55.901029  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:55.901378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:55.901575  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:55.901796  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:55.901811  172338 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 01:53:56.003790  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 01:53:56.003817  172338 buildroot.go:70] root file system type: tmpfs
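buildroot.go derives the root filesystem type from a one-liner, `df --output=fstype / | tail -n 1`, which on the minikube guest reports tmpfs. The same probe in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the probe from the log and returns the trimmed filesystem name,
// e.g. "tmpfs" on the buildroot guest shown above.
func rootFSType() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"df --output=fstype / | tail -n 1").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	t, err := rootFSType()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", t)
}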
	I0229 01:53:56.003960  172338 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 01:53:56.003989  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.006912  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007266  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.007291  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.007470  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.007629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007793  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.007997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.008184  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.008354  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.008418  172338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 01:53:56.124499  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 01:53:56.124533  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:56.127457  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127793  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:56.127829  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:56.127968  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:56.128151  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128308  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:56.128498  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:56.128680  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:56.128833  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:56.128852  172338 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 01:53:55.187275  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.189486  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:56.706921  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:59.205557  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:53:57.106913  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 01:53:57.106944  172338 machine.go:91] provisioned docker machine in 1.737051901s
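The step that produced the "Created symlink" output above is a classic idempotent install: `diff -u` exits non-zero when docker.service.new differs from the installed unit (or, as here, when the unit doesn't exist yet, hence the "can't stat" message), and only then does the mv / daemon-reload / enable / restart branch run. The one-liner, rebuilt verbatim:

package main

import "fmt"

// installUnitCmd reproduces the idempotent unit install from the log: unchanged units
// short-circuit at the diff, so docker is only restarted when the file actually moved.
func installUnitCmd() string {
	return "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new" +
		" || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service;" +
		" sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
}

func main() { fmt.Println(installUnitCmd()) }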
	I0229 01:53:57.106958  172338 start.go:300] post-start starting for "newest-cni-133807" (driver="kvm2")
	I0229 01:53:57.106971  172338 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 01:53:57.106987  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.107348  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 01:53:57.107378  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.109947  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110278  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.110306  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.110419  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.110655  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.110847  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.110998  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.195254  172338 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 01:53:57.199660  172338 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 01:53:57.199686  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
	I0229 01:53:57.199749  172338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
	I0229 01:53:57.199861  172338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
	I0229 01:53:57.199978  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 01:53:57.211667  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:53:57.236009  172338 start.go:303] post-start completed in 129.030126ms
	I0229 01:53:57.236038  172338 fix.go:56] fixHost completed within 20.236678345s
	I0229 01:53:57.236066  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.239097  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239405  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.239428  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.239632  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.239810  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.239990  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.240135  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.240351  172338 main.go:141] libmachine: Using SSH client type: native
	I0229 01:53:57.240577  172338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.50.38 22 <nil> <nil>}
	I0229 01:53:57.240592  172338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 01:53:57.347803  172338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709171637.329083069
	
	I0229 01:53:57.347829  172338 fix.go:206] guest clock: 1709171637.329083069
	I0229 01:53:57.347839  172338 fix.go:219] Guest: 2024-02-29 01:53:57.329083069 +0000 UTC Remote: 2024-02-29 01:53:57.236042976 +0000 UTC m=+20.403256492 (delta=93.040093ms)
	I0229 01:53:57.347867  172338 fix.go:190] guest clock delta is within tolerance: 93.040093ms
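The guest-clock check decodes as follows: the `date +%!s(MISSING).%!N(MISSING)` line is minikube's logger mangling the literal `date +%s.%N` it sends, the guest answers with epoch seconds and nanoseconds, and fix.go compares that against the host clock (here Guest - Remote = 93.040093ms, within tolerance). A sketch of the parse-and-compare; the 2s tolerance below is an assumption:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns guest minus host.
// GNU date's %N always prints nine digits, so the fraction parses as nanoseconds.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(hostNow), nil
}

func main() {
	// Values taken from the log: guest 1709171637.329083069, remote ...237.236042976.
	d, err := clockDelta("1709171637.329083069", time.Unix(1709171637, 236042976))
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < 2*time.Second && d > -2*time.Second)
}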
	I0229 01:53:57.347875  172338 start.go:83] releasing machines lock for "newest-cni-133807", held for 20.348533837s
	I0229 01:53:57.347898  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.348162  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:53:57.350842  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351284  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.351312  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.351648  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352219  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352485  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:53:57.352599  172338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 01:53:57.352685  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.352765  172338 ssh_runner.go:195] Run: cat /version.json
	I0229 01:53:57.352801  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:53:57.355935  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356331  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356570  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.356571  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.356764  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.356906  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:53:57.356923  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.356930  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:53:57.357085  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:53:57.357144  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.357257  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:53:57.357402  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:53:57.357558  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:53:57.439867  172338 ssh_runner.go:195] Run: systemctl --version
	I0229 01:53:57.461722  172338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 01:53:57.469492  172338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 01:53:57.469553  172338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 01:53:57.488804  172338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 01:53:57.488832  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.488972  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:53:57.510573  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 01:53:57.522254  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 01:53:57.533175  172338 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 01:53:57.533265  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 01:53:57.544648  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.556155  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 01:53:57.568806  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 01:53:57.579441  172338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 01:53:57.591000  172338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 01:53:57.602790  172338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 01:53:57.612548  172338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 01:53:57.622708  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:57.774983  172338 ssh_runner.go:195] Run: sudo systemctl restart containerd
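Pointing containerd at the cgroupfs driver is done entirely with in-place sed edits followed by a daemon-reload and restart; every command below is copied from the log lines above:

package main

import "fmt"

// containerdCgroupfsCmds collects the edits from the log that pin the pause image,
// disable SystemdCgroup, and move any v1 runc shims to io.containerd.runc.v2.
func containerdCgroupfsCmds() []string {
	return []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart containerd`,
	}
}

func main() {
	for _, c := range containerdCgroupfsCmds() {
		fmt.Println(c)
	}
}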
	I0229 01:53:57.803366  172338 start.go:475] detecting cgroup driver to use...
	I0229 01:53:57.803462  172338 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 01:53:57.819377  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.835552  172338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 01:53:57.855766  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 01:53:57.870321  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.882616  172338 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 01:53:57.906767  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 01:53:57.919519  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 01:53:57.937892  172338 ssh_runner.go:195] Run: which cri-dockerd
	I0229 01:53:57.941557  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 01:53:57.950404  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 01:53:57.966732  172338 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 01:53:58.084501  172338 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 01:53:58.208172  172338 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 01:53:58.208327  172338 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 01:53:58.231616  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:53:58.339214  172338 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 01:53:59.877873  172338 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.53860785s)
	I0229 01:53:59.877980  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 01:53:59.892601  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:53:59.908111  172338 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 01:54:00.026741  172338 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 01:54:00.150989  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.270596  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 01:54:00.292845  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 01:54:00.310771  172338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 01:54:00.442177  172338 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 01:54:00.520800  172338 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 01:54:00.520874  172338 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
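"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a stat poll: keep checking whether the socket file exists until a deadline passes. A sketch; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path via repeated stat calls, the way the
// log's "Will wait 60s for socket path" step does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-dockerd socket is up")
}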
	I0229 01:54:00.527623  172338 start.go:543] Will wait 60s for crictl version
	I0229 01:54:00.527683  172338 ssh_runner.go:195] Run: which crictl
	I0229 01:54:00.532463  172338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 01:54:00.599208  172338 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 01:54:00.599291  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.627562  172338 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 01:54:00.655024  172338 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 01:54:00.655069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetIP
	I0229 01:54:00.658010  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658343  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:00.658372  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:00.658608  172338 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0229 01:54:00.662943  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
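The host.minikube.internal entry is refreshed atomically: grep -v strips any stale entry, echo appends the current one, the result lands in a temp file keyed by the shell's PID, and a single sudo cp replaces /etc/hosts so readers never see a half-written file. The same one-liner rebuilt in Go; note the grep pattern uses bash's $'\t' escaping while the echo embeds a literal tab:

package main

import "fmt"

// hostsEntryCmd rebuilds the one-liner from the log. The $'\t...' form lets grep -v
// match the tab before host.minikube.internal; the echo writes a real tab character.
func hostsEntryCmd(ip string) string {
	return fmt.Sprintf("{ grep -v $'\\thost.minikube.internal$' \"/etc/hosts\"; "+
		"echo \"%s\thost.minikube.internal\"; } > /tmp/h.$$; "+
		"sudo cp /tmp/h.$$ \"/etc/hosts\"", ip)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.50.1"))
}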
	I0229 01:54:00.679113  172338 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 01:53:57.304118  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:53:57.317352  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:53:57.334647  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.334674  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:53:57.334724  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:53:57.354591  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.354620  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:53:57.354664  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:53:57.378535  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.378558  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:53:57.378613  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:53:57.398944  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.398973  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:53:57.399019  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:53:57.419479  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.419500  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:53:57.419544  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:53:57.435860  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.435888  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:53:57.435942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:53:57.453347  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.453383  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:53:57.453430  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:53:57.473140  170748 logs.go:276] 0 containers: []
	W0229 01:53:57.473168  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:53:57.473182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:53:57.473196  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:53:57.526048  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:53:57.526079  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:53:57.541246  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:53:57.541271  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:53:57.616011  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:53:57.616037  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:53:57.616052  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:53:57.658815  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:53:57.658856  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
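	Each diagnostics round in this retry loop gathers the same five sources; condensed into one runnable sequence on the node (commands exactly as the log shows, kubectl path matching the v1.16.0 cluster under repair):

	    sudo journalctl -u kubelet -n 400                                         # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig                             # fails until the apiserver is back
	    sudo journalctl -u docker -u cri-docker -n 400                            # container runtime logs
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a            # container status: CRI first, docker fallback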
	I0229 01:54:00.228028  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:00.242250  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:00.260188  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.260217  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:00.260277  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:00.279694  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.279717  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:00.279768  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:00.300245  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.300276  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:00.300331  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:00.321402  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.321423  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:00.321484  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:00.341221  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.341252  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:00.341309  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:00.359202  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.359228  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:00.359274  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:00.377486  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.377515  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:00.377566  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:00.396751  170748 logs.go:276] 0 containers: []
	W0229 01:54:00.396780  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:00.396792  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:00.396804  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:00.411321  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:00.411354  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:00.486044  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:00.486070  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:00.486086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:00.533467  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:00.533493  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:00.601400  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:00.601429  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:00.680518  172338 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 01:54:00.680595  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.699558  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.699582  172338 docker.go:615] Images already preloaded, skipping extraction
	I0229 01:54:00.699651  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:00.720362  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:00.720382  172338 cache_images.go:84] Images are preloaded, skipping loading
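	The preload decision above is a set comparison: list the tags the docker daemon already has and skip extracting the preload tarball when every expected image is present. A hedged sketch of that check (expected list abbreviated to two of the images shown):

	    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
	    for img in registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/etcd:3.5.10-0; do
	        echo "$have" | grep -qx "$img" || { echo "missing $img -> extract preload"; break; }
	    done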
	I0229 01:54:00.720435  172338 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 01:54:00.750538  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:00.750564  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:00.750582  172338 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 01:54:00.750604  172338 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.38 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-133807 NodeName:newest-cni-133807 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.38"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.38 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 01:54:00.750845  172338 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.38
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-133807"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.38
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.38"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
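	The file written above concatenates four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. The v1.29 kubeadm staged on the node can sanity-check a hand-edited copy offline; minikube itself does not run this in the log, so treat it as an optional sketch:

	    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml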
	I0229 01:54:00.750974  172338 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-133807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.38
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
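	The [Service] drop-in above clears the packaged ExecStart (the bare ExecStart= line) before substituting minikube's kubelet invocation; a change like this only takes effect after systemd re-reads unit files. A minimal sketch of applying such a drop-in by hand (flags abbreviated relative to the log):

	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	    EOF
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet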
	I0229 01:54:00.751053  172338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 01:54:00.763338  172338 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 01:54:00.763421  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 01:54:00.774930  172338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (421 bytes)
	I0229 01:54:00.795559  172338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 01:54:00.816378  172338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0229 01:54:00.836392  172338 ssh_runner.go:195] Run: grep 192.168.50.38	control-plane.minikube.internal$ /etc/hosts
	I0229 01:54:00.841301  172338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.38	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 01:54:00.855335  172338 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807 for IP: 192.168.50.38
	I0229 01:54:00.855370  172338 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:00.855555  172338 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
	I0229 01:54:00.855595  172338 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
	I0229 01:54:00.855699  172338 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/client.key
	I0229 01:54:00.855776  172338 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key.01da567d
	I0229 01:54:00.855837  172338 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key
	I0229 01:54:00.856003  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
	W0229 01:54:00.856056  172338 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
	I0229 01:54:00.856071  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 01:54:00.856107  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
	I0229 01:54:00.856141  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
	I0229 01:54:00.856172  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
	I0229 01:54:00.856231  172338 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
	I0229 01:54:00.856935  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 01:54:00.884304  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 01:54:00.909114  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 01:54:00.932767  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/newest-cni-133807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 01:54:00.957174  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 01:54:00.982424  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0229 01:54:01.005673  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 01:54:01.029470  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 01:54:01.056951  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
	I0229 01:54:01.080261  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 01:54:01.104850  172338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
	I0229 01:54:01.128318  172338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 01:54:01.145321  172338 ssh_runner.go:195] Run: openssl version
	I0229 01:54:01.150792  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
	I0229 01:54:01.162288  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166729  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.166774  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
	I0229 01:54:01.172237  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
	I0229 01:54:01.183583  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
	I0229 01:54:01.195364  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199820  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.199890  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
	I0229 01:54:01.205840  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 01:54:01.217694  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 01:54:01.229231  172338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233770  172338 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.233841  172338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 01:54:01.239419  172338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
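	The three openssl/ln pairs above implement OpenSSL's hashed-directory layout: a CA dropped into /etc/ssl/certs is only found during verification if it is also reachable under <subject-hash>.0. The same pattern for one hypothetical CA file:

	    CERT=/usr/share/ca-certificates/example-ca.pem    # hypothetical path
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # e.g. b5213941 for minikubeCA above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"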
	I0229 01:54:01.250900  172338 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 01:54:01.255351  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 01:54:01.261364  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 01:54:01.267843  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 01:54:01.273917  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 01:54:01.279780  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 01:54:01.285722  172338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
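	The six checks above rely on openssl's -checkend 86400, which exits non-zero when the certificate expires within the next 86,400 seconds (24 h); a zero exit on each is what lets the restart proceed without regenerating certs. An equivalent loop over the same files:

	    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	               etcd/server etcd/healthcheck-client etcd/peer; do
	        sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt.crt" \
	            || echo "$crt expires within 24h"
	    done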
	I0229 01:54:01.295181  172338 kubeadm.go:404] StartCluster: {Name:newest-cni-133807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-133807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 01:54:01.295318  172338 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:01.327657  172338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 01:54:01.340602  172338 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 01:54:01.340626  172338 kubeadm.go:636] restartCluster start
	I0229 01:54:01.340676  172338 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 01:54:01.351659  172338 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:01.352394  172338 kubeconfig.go:135] verify returned: extract IP: "newest-cni-133807" does not appear in /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:01.352778  172338 kubeconfig.go:146] "newest-cni-133807" context is missing from /home/jenkins/minikube-integration/18063-115328/kubeconfig - will repair!
	I0229 01:54:01.353471  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:01.354935  172338 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 01:54:01.365295  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.365346  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.379525  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:01.866175  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:01.866250  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:01.880632  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:53:59.689914  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.694344  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:01.208129  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.705473  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:03.160372  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:03.174216  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:03.193976  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.193997  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:03.194047  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:03.212210  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.212237  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:03.212282  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:03.229155  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.229178  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:03.229223  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:03.248201  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.248224  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:03.248287  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:03.267884  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.267908  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:03.267952  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:03.287746  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.287770  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:03.287821  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:03.306938  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.306967  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:03.307016  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:03.326486  170748 logs.go:276] 0 containers: []
	W0229 01:54:03.326519  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:03.326534  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:03.326549  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:03.395132  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:03.395184  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:03.412879  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:03.412913  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:03.482097  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:03.482120  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:03.482132  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:03.525422  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:03.525455  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.083568  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:06.096663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:06.114370  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.114400  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:06.114445  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:06.131116  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.131136  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:06.131180  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:06.147183  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.147206  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:06.147261  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:06.163312  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.163335  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:06.163381  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:06.180224  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.180248  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:06.180302  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:06.197599  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.197627  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:06.197682  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:06.215691  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.215711  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:06.215756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:06.232575  170748 logs.go:276] 0 containers: []
	W0229 01:54:06.232594  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:06.232606  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:06.232619  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:06.274143  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:06.274169  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:06.333535  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:06.333568  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:06.385263  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:06.385291  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:06.399965  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:06.399998  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:06.462490  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:02.365814  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.365888  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.381326  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:02.865848  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:02.865928  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:02.881269  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.365397  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.365478  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.380922  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:03.865482  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:03.865596  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:03.879430  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.366070  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.366183  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.381485  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.866086  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:04.866191  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:04.879535  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.366159  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.366268  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.379573  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:05.865791  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:05.865883  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:05.881058  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.365561  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.365642  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.379122  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:06.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:06.865926  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:06.879810  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:04.186274  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:06.187331  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.687316  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:05.705984  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:07.706819  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:08.962748  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:08.979756  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:08.996761  170748 logs.go:276] 0 containers: []
	W0229 01:54:08.996786  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:08.996840  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:09.020061  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.020088  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:09.020144  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:09.042548  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.042578  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:09.042633  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:09.072428  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.072461  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:09.072525  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:09.089193  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.089216  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:09.089262  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:09.107143  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.107170  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:09.107220  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:09.125208  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.125228  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:09.125268  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:09.143488  170748 logs.go:276] 0 containers: []
	W0229 01:54:09.143511  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:09.143522  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:09.143535  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:09.214360  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:09.214382  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:09.214395  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:09.256462  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:09.256492  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:09.312362  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:09.312392  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:09.362596  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:09.362630  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:07.365617  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.365729  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.379799  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:07.865347  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:07.865455  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:07.879417  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.366028  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.366123  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.380127  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:08.865702  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:08.865849  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:08.880014  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.365550  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.365632  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.382898  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:09.865431  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:09.865510  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:09.879281  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.365768  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.365864  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.380308  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:10.865845  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:10.865941  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:10.879469  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.366107  172338 api_server.go:166] Checking apiserver status ...
	I0229 01:54:11.366212  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 01:54:11.380134  172338 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:54:11.380168  172338 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 01:54:11.380204  172338 kubeadm.go:1135] stopping kube-system containers ...
	I0229 01:54:11.380272  172338 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 01:54:11.400551  172338 docker.go:483] Stopping containers: [b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292]
	I0229 01:54:11.400620  172338 ssh_runner.go:195] Run: docker stop b97b0102f58d a657aef5edb8 69945f0b8e5a fda60cf34615 1c2980f6901d 2d2cce1364cd 9cff337f44d3 6a80e3b3c5d9 e640fc811093 ade36214d42e ca8eb20e62a8 55324cad79aa 7479ee594672 cbca27468292
	I0229 01:54:11.420276  172338 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 01:54:11.442755  172338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:11.452745  172338 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 01:54:11.452816  172338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462724  172338 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:11.462746  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:11.576479  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:10.687632  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:13.188979  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:09.707636  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:12.206349  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:14.206598  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:11.880988  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:11.894918  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:11.915749  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.915777  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:11.915837  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:11.933269  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.933295  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:11.933388  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:11.950460  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.950483  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:11.950530  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:11.966919  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.966943  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:11.967004  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:11.987487  170748 logs.go:276] 0 containers: []
	W0229 01:54:11.987519  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:11.987602  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:12.011234  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.011265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:12.011324  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:12.039057  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.039083  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:12.039140  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:12.062016  170748 logs.go:276] 0 containers: []
	W0229 01:54:12.062047  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:12.062061  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:12.062078  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.116706  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:12.116744  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:12.176126  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:12.176156  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:12.234175  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:12.234210  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:12.249559  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:12.249597  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:12.321806  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:14.822521  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.837453  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:14.857687  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.857723  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:14.857804  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:14.879933  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.879966  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:14.880025  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:14.903296  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.903334  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:14.903477  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:14.924603  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.924635  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:14.924697  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:14.943135  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.943159  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:14.943218  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:14.961231  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.961265  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:14.961326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:14.993744  170748 logs.go:276] 0 containers: []
	W0229 01:54:14.993786  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:14.993857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:15.013656  170748 logs.go:276] 0 containers: []
	W0229 01:54:15.013686  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:15.013700  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:15.013714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:15.092540  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:15.092576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:15.162362  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:15.162406  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:15.178584  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:15.178612  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:15.256534  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:15.256560  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:15.256576  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:12.722918  172338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.146406214s)
	I0229 01:54:12.722946  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:12.927585  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:13.040907  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
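	restartCluster deliberately replays individual kubeadm phases instead of a full kubeadm init, in the order the five Run lines above show: certs, kubeconfig, kubelet-start, control-plane, etcd. Condensed into one loop (same binary and config paths as the log):

	    K=/var/lib/minikube/binaries/v1.29.0-rc.2
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        sudo env PATH="$K:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done   # $phase left unquoted on purpose so "certs all" splits into subcommand + scope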
	I0229 01:54:13.139301  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:13.139384  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:13.640506  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.139790  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.640206  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:14.663070  172338 api_server.go:72] duration metric: took 1.523766735s to wait for apiserver process to appear ...
	I0229 01:54:14.663104  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:14.663126  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:14.663675  172338 api_server.go:269] stopped: https://192.168.50.38:8443/healthz: Get "https://192.168.50.38:8443/healthz": dial tcp 192.168.50.38:8443: connect: connection refused
	I0229 01:54:15.163277  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:15.190654  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.686359  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:16.207410  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:18.705701  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:17.942183  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.942214  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:17.942230  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:17.987284  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 01:54:17.987321  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 01:54:18.163519  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.168857  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:18.168891  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:18.663488  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:18.668213  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:18.668238  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.163425  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.171029  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 01:54:19.171065  172338 api_server.go:103] status: https://192.168.50.38:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 01:54:19.664211  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:19.668342  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:19.675820  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:19.675849  172338 api_server.go:131] duration metric: took 5.012736256s to wait for apiserver health ...
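
The 172338 healthz wait above shows the typical startup progression: connection refused while the apiserver process is still coming up, then 403 for system:anonymous before the RBAC bootstrap roles exist, then 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still pending, and finally 200. A minimal illustrative poller in Go, not minikube's api_server.go; the endpoint and ~500ms cadence come from the log above, the rest is assumed.

// Illustrative healthz poller: repeat the GET until /healthz returns 200,
// treating refused connections, 403, and 500 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log above. Certificate verification is skipped
	// because the probe connects anonymously, which is also why the early
	// responses are 403 for user "system:anonymous".
	url := "https://192.168.50.38:8443/healthz"
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
}
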
	I0229 01:54:19.675858  172338 cni.go:84] Creating CNI manager for ""
	I0229 01:54:19.675869  172338 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 01:54:19.677686  172338 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 01:54:19.678985  172338 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 01:54:19.690408  172338 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
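
The two lines above show minikube provisioning bridge CNI: create /etc/cni/net.d and copy an in-memory conflist to 1-k8s.conflist. As a sketch only, the Go program below writes a generic bridge+portmap conflist to that path; the JSON here is a standard illustrative example, not the exact 457-byte file minikube renders.

// Illustrative only: write a generic bridge CNI conflist to the path the
// log shows (requires root on a real host).
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
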
	I0229 01:54:19.711239  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:19.720671  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:19.720701  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:19.720709  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:19.720715  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:19.720723  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:19.720731  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 01:54:19.720736  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:19.720741  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:19.720761  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:19.720767  172338 system_pods.go:74] duration metric: took 9.509631ms to wait for pod list to return data ...
	I0229 01:54:19.720776  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:19.724321  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:19.724346  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:19.724358  172338 node_conditions.go:105] duration metric: took 3.577361ms to run NodePressure ...
	I0229 01:54:19.724376  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 01:54:20.003533  172338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 01:54:20.017015  172338 ops.go:34] apiserver oom_adj: -16
	I0229 01:54:20.017041  172338 kubeadm.go:640] restartCluster took 18.676407847s
	I0229 01:54:20.017053  172338 kubeadm.go:406] StartCluster complete in 18.721880164s
	I0229 01:54:20.017075  172338 settings.go:142] acquiring lock: {Name:mk324b2a181b324166fa2d8da3ad5d1101ca0339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.017158  172338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 01:54:20.018872  172338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/kubeconfig: {Name:mk21fc34ec5e2a9f1bc37fcc8d970f71352c84fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 01:54:20.019139  172338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 01:54:20.019351  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019320  172338 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 01:54:20.019413  172338 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-133807"
	I0229 01:54:20.019429  172338 addons.go:69] Setting default-storageclass=true in profile "newest-cni-133807"
	I0229 01:54:20.019437  172338 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-133807"
	W0229 01:54:20.019445  172338 addons.go:243] addon storage-provisioner should already be in state true
	I0229 01:54:20.019445  172338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-133807"
	I0229 01:54:20.019429  172338 cache.go:107] acquiring lock: {Name:mkf83f87b4b5efd9201d385629e40dc6af5715f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 01:54:20.019496  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019509  172338 cache.go:115] /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 01:54:20.019520  172338 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 106.029µs
	I0229 01:54:20.019530  172338 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 01:54:20.019528  172338 addons.go:69] Setting metrics-server=true in profile "newest-cni-133807"
	I0229 01:54:20.019539  172338 cache.go:87] Successfully saved all images to host disk.
	I0229 01:54:20.019551  172338 addons.go:234] Setting addon metrics-server=true in "newest-cni-133807"
	W0229 01:54:20.019561  172338 addons.go:243] addon metrics-server should already be in state true
	I0229 01:54:20.019604  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.019735  172338 config.go:182] Loaded profile config "newest-cni-133807": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.019930  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.019895  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020002  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020042  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020045  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020109  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.020138  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.020260  172338 addons.go:69] Setting dashboard=true in profile "newest-cni-133807"
	I0229 01:54:20.020302  172338 addons.go:234] Setting addon dashboard=true in "newest-cni-133807"
	W0229 01:54:20.020310  172338 addons.go:243] addon dashboard should already be in state true
	I0229 01:54:20.020476  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.020937  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.021009  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.029773  172338 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-133807" context rescaled to 1 replicas
	I0229 01:54:20.029823  172338 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.38 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 01:54:20.031663  172338 out.go:177] * Verifying Kubernetes components...
	I0229 01:54:20.033048  172338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:20.041914  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41751
	I0229 01:54:20.041918  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34537
	I0229 01:54:20.041966  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0229 01:54:20.041928  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0229 01:54:20.042220  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40429
	I0229 01:54:20.042451  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042454  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.042924  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043005  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043019  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043030  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043044  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.043051  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043098  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043401  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043418  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043428  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.043543  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043555  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.043558  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.043567  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.044095  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044134  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.044332  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044374  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044404  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044425  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.044925  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.044970  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.045173  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045201  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.045588  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.045633  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.047760  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.047785  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.049100  172338 addons.go:234] Setting addon default-storageclass=true in "newest-cni-133807"
	W0229 01:54:20.049123  172338 addons.go:243] addon default-storageclass should already be in state true
	I0229 01:54:20.049152  172338 host.go:66] Checking if "newest-cni-133807" exists ...
	I0229 01:54:20.049548  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.049584  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.064541  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0229 01:54:20.065017  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065158  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I0229 01:54:20.065470  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.065736  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065747  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.065986  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.065997  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.066225  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066313  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.066403  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.066481  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.068564  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0229 01:54:20.068997  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.069067  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.069072  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.071190  172338 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 01:54:20.069506  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.072655  172338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 01:54:20.072680  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.074227  172338 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.074244  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 01:54:20.074265  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.072649  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 01:54:20.074288  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 01:54:20.074310  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.074704  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.074919  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.075229  172338 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 01:54:20.075252  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.078346  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.079734  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.079764  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080050  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.080073  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.080476  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080531  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.080805  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.080854  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.081053  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081112  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.081357  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.081683  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.081913  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0229 01:54:20.082210  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.082371  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082386  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0229 01:54:20.082793  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.082934  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.082954  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.083003  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083017  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083155  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.083315  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.083325  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.083372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.083400  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083505  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.083661  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.083828  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.083874  172338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:54:20.083905  172338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:54:20.084097  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.085520  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.087522  172338 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 01:54:20.088944  172338 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 01:54:17.803447  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:17.818754  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:17.838257  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.838289  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:17.838351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:17.859095  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.859128  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:17.859188  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:17.880186  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.880219  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:17.880281  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:17.905367  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.905415  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:17.905476  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:17.926888  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.926913  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:17.926974  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:17.948858  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.948884  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:17.948941  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:17.967835  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.967871  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:17.967930  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:17.999903  170748 logs.go:276] 0 containers: []
	W0229 01:54:17.999935  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:17.999949  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:17.999963  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:18.066021  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:18.066065  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:18.091596  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:18.091621  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:18.167407  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:18.167429  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:18.167444  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:18.212978  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:18.213013  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:20.785493  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.802351  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:20.825685  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.825720  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:20.825770  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:20.849013  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.849043  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:20.849111  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:20.871166  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.871198  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:20.871249  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:20.889932  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.889963  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:20.890022  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:20.912390  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.912416  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:20.912492  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:20.931206  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.931233  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:20.931291  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:20.949663  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.949687  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:20.949739  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:20.967249  170748 logs.go:276] 0 containers: []
	W0229 01:54:20.967277  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:20.967288  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:20.967299  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:21.062400  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:21.062428  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:21.062445  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:21.113883  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:21.113924  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:21.180620  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:21.180659  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:21.236555  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:21.236589  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:20.090259  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 01:54:20.090273  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 01:54:20.090286  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.092728  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093153  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.093186  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.093317  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.093479  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.093618  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.093732  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.118803  172338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0229 01:54:20.119213  172338 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:54:20.119796  172338 main.go:141] libmachine: Using API Version  1
	I0229 01:54:20.119825  172338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:54:20.120194  172338 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:54:20.120440  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetState
	I0229 01:54:20.121995  172338 main.go:141] libmachine: (newest-cni-133807) Calling .DriverName
	I0229 01:54:20.122309  172338 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.122327  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 01:54:20.122352  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHHostname
	I0229 01:54:20.124725  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125104  172338 main.go:141] libmachine: (newest-cni-133807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:31:1d", ip: ""} in network mk-newest-cni-133807: {Iface:virbr2 ExpiryTime:2024-02-29 02:53:48 +0000 UTC Type:0 Mac:52:54:00:2f:31:1d Iaid: IPaddr:192.168.50.38 Prefix:24 Hostname:newest-cni-133807 Clientid:01:52:54:00:2f:31:1d}
	I0229 01:54:20.125126  172338 main.go:141] libmachine: (newest-cni-133807) DBG | domain newest-cni-133807 has defined IP address 192.168.50.38 and MAC address 52:54:00:2f:31:1d in network mk-newest-cni-133807
	I0229 01:54:20.125372  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHPort
	I0229 01:54:20.125513  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHKeyPath
	I0229 01:54:20.125629  172338 main.go:141] libmachine: (newest-cni-133807) Calling .GetSSHUsername
	I0229 01:54:20.125721  172338 sshutil.go:53] new ssh client: &{IP:192.168.50.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/newest-cni-133807/id_rsa Username:docker}
	I0229 01:54:20.333837  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 01:54:20.333867  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 01:54:20.365581  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 01:54:20.365605  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 01:54:20.387559  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 01:54:20.387585  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 01:54:20.391190  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 01:54:20.394118  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 01:54:20.442370  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 01:54:20.442407  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 01:54:20.466973  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 01:54:20.467005  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 01:54:20.489843  172338 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 01:54:20.489843  172338 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:54:20.489919  172338 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 01:54:20.489940  172338 cache_images.go:84] Images are preloaded, skipping loading
	I0229 01:54:20.489947  172338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:20.489953  172338 cache_images.go:262] succeeded pushing to: newest-cni-133807
	I0229 01:54:20.489960  172338 cache_images.go:263] failed pushing to: 
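
"Images are preloaded, skipping loading" above means the required images already exist on the node, so no tarballs need to be pushed. A small illustrative check in Go, assuming nothing beyond the `docker images --format {{.Repository}}:{{.Tag}}` command and the image list from the stdout block above:

// Illustrative: list images docker already has and report any from the
// required set (subset of the stdout block above) that are missing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/etcd:3.5.10-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would need to load:", img)
		}
	}
}
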
	I0229 01:54:20.489991  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490005  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490309  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490327  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.490335  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:20.490342  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:20.490620  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:20.490605  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:20.490643  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:20.507250  172338 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.507271  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 01:54:20.529738  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 01:54:20.572814  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 01:54:20.572836  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 01:54:20.614903  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 01:54:20.614929  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 01:54:20.698112  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 01:54:20.698133  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 01:54:20.767402  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 01:54:20.767429  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 01:54:20.833849  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 01:54:20.833880  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 01:54:20.894077  172338 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 01:54:20.894100  172338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 01:54:20.947725  172338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
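
The dashboard addon above is applied in one kubectl invocation with a -f flag per manifest, against the in-cluster kubeconfig. A minimal Go sketch of building that same command; the binary path, kubeconfig path, and manifest list are taken verbatim from the log line above, the surrounding program is illustrative.

// Illustrative sketch of the batched apply above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-configmap.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-role.yaml",
		"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-sa.yaml",
		"/etc/kubernetes/addons/dashboard-secret.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f per addon manifest
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
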
	I0229 01:54:21.834822  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.440658264s)
	I0229 01:54:21.834862  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.443647567s)
	I0229 01:54:21.834881  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.834882  172338 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.344911071s)
	I0229 01:54:21.834935  172338 api_server.go:72] duration metric: took 1.805074704s to wait for apiserver process to appear ...
	I0229 01:54:21.834954  172338 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:54:21.834975  172338 api_server.go:253] Checking apiserver healthz at https://192.168.50.38:8443/healthz ...
	I0229 01:54:21.834886  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835069  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.834904  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835393  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835415  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835425  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835429  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835443  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835456  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.835468  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835479  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835433  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.835847  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835856  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.835859  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.835862  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835868  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.835874  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.843384  172338 api_server.go:279] https://192.168.50.38:8443/healthz returned 200:
	ok
	I0229 01:54:21.844033  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.844056  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.844319  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:21.844354  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.844370  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.844766  172338 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:54:21.844804  172338 api_server.go:131] duration metric: took 9.827817ms to wait for apiserver health ...
	I0229 01:54:21.844815  172338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:54:21.851946  172338 system_pods.go:59] 8 kube-system pods found
	I0229 01:54:21.851980  172338 system_pods.go:61] "coredns-76f75df574-mmkfr" [f879cc8d-803d-4ef7-b0e2-2a910b2894c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 01:54:21.851990  172338 system_pods.go:61] "etcd-newest-cni-133807" [6d03a967-5928-428c-9e4e-a42887fcca2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 01:54:21.852004  172338 system_pods.go:61] "kube-apiserver-newest-cni-133807" [24293d8a-1562-49a0-a361-d2847499e2c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 01:54:21.852013  172338 system_pods.go:61] "kube-controller-manager-newest-cni-133807" [34d5dfb1-989b-4f5b-a340-d252328cab81] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 01:54:21.852024  172338 system_pods.go:61] "kube-proxy-ckzl4" [cbfe78c3-7173-48dc-b187-5cb98306de47] Running
	I0229 01:54:21.852032  172338 system_pods.go:61] "kube-scheduler-newest-cni-133807" [f5482e87-1e31-49b9-a145-817d8266502f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 01:54:21.852042  172338 system_pods.go:61] "metrics-server-57f55c9bc5-zxm8h" [d3e7d9d1-e461-460b-bd08-90121b6617ca] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:54:21.852052  172338 system_pods.go:61] "storage-provisioner" [1089443a-7361-4936-a03d-f05d8f000c1f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 01:54:21.852063  172338 system_pods.go:74] duration metric: took 7.238252ms to wait for pod list to return data ...
	I0229 01:54:21.852075  172338 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:54:21.855974  172338 default_sa.go:45] found service account: "default"
	I0229 01:54:21.856003  172338 default_sa.go:55] duration metric: took 3.916391ms for default service account to be created ...
	I0229 01:54:21.856020  172338 kubeadm.go:581] duration metric: took 1.826163486s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 01:54:21.856046  172338 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:54:21.858351  172338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:54:21.858367  172338 node_conditions.go:123] node cpu capacity is 2
	I0229 01:54:21.858377  172338 node_conditions.go:105] duration metric: took 2.326102ms to run NodePressure ...
	I0229 01:54:21.858387  172338 start.go:228] waiting for startup goroutines ...
	I0229 01:54:21.896983  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.367194081s)
	I0229 01:54:21.897048  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897070  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897356  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897372  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897386  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:21.897397  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:21.897669  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:21.897686  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:21.897701  172338 addons.go:470] Verifying addon metrics-server=true in "newest-cni-133807"
	I0229 01:54:22.315002  172338 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.367214151s)
	I0229 01:54:22.315099  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315119  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315448  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315472  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315488  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.315512  172338 main.go:141] libmachine: Making call to close driver server
	I0229 01:54:22.315524  172338 main.go:141] libmachine: (newest-cni-133807) Calling .Close
	I0229 01:54:22.315797  172338 main.go:141] libmachine: (newest-cni-133807) DBG | Closing plugin on server side
	I0229 01:54:22.315830  172338 main.go:141] libmachine: Successfully made call to close driver server
	I0229 01:54:22.315843  172338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 01:54:22.317416  172338 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-133807 addons enable metrics-server
	
	I0229 01:54:22.318943  172338 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0229 01:54:22.320494  172338 addons.go:505] enable addons completed in 2.301194216s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0229 01:54:22.320539  172338 start.go:233] waiting for cluster config update ...
	I0229 01:54:22.320554  172338 start.go:242] writing updated cluster config ...
	I0229 01:54:22.320879  172338 ssh_runner.go:195] Run: rm -f paused
	I0229 01:54:22.378739  172338 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:54:22.380459  172338 out.go:177] * Done! kubectl is now configured to use "newest-cni-133807" cluster and "default" namespace by default
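The closing "kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)" line compares the client and server minor versions. A rough sketch of that arithmetic, assuming plain major.minor.patch inputs (the pre-release tag "-rc.2" is dropped here for brevity; a real parser must handle it):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf pulls the minor component out of a "major.minor.patch" string.
func minorOf(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.29.2", "1.29.0" // "-rc.2" suffix dropped for brevity
	skew := minorOf(kubectl) - minorOf(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Println("minor skew:", skew) // prints 0, matching the log line
}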
	I0229 01:54:19.687767  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:21.689355  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:20.707480  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:22.707979  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:23.754280  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:23.768586  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:23.793150  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.793172  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:23.793221  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:23.818865  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.818896  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:23.818949  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:23.838078  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.838105  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:23.838161  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:23.859213  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.859235  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:23.859279  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:23.878876  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.878901  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:23.878938  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:23.899317  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.899340  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:23.899387  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:23.916826  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.916851  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:23.916891  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:23.933713  170748 logs.go:276] 0 containers: []
	W0229 01:54:23.933739  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:23.933752  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:23.933766  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:24.003099  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:24.003136  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:24.021001  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:24.021038  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:24.097013  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:24.097035  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:24.097050  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:24.145682  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:24.145714  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
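The container-status command above relies on a shell fallback: `which crictl || echo crictl` substitutes the crictl path when it exists and the bare word "crictl" otherwise, so if the first pipeline fails the trailing `|| sudo docker ps -a` still runs. A hypothetical sketch of invoking the same fallback from Go, with $(...) standing in for the log's backtick substitution:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// $(which crictl || echo crictl) resolves to the crictl path when it is
	// installed, or the bare word "crictl" otherwise; if that pipeline
	// fails, the || falls through to plain docker.
	cmd := "sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}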
	I0229 01:54:26.710373  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:26.724077  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:26.740532  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.740556  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:26.740603  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:24.187991  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.188081  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:28.688297  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:24.708094  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:27.205437  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:29.206577  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:26.758229  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.758251  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:26.758294  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:26.774881  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.774904  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:26.774971  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:26.790893  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.790913  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:26.790953  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:26.807273  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.807300  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:26.807359  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:26.824081  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.824107  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:26.824165  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:26.840770  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.840793  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:26.840851  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:26.856932  170748 logs.go:276] 0 containers: []
	W0229 01:54:26.856966  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:26.856980  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:26.856995  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:26.907299  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:26.907331  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:26.922552  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:26.922585  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:26.999079  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:26.999109  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:26.999125  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:27.051061  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:27.051098  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:29.607727  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:29.622929  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:29.641829  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.641861  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:29.641932  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:29.658732  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.658761  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:29.658825  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:29.676597  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.676619  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:29.676663  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:29.695001  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.695030  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:29.695089  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:29.711947  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.711982  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:29.712038  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:29.728832  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.728860  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:29.728925  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:29.744888  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.744907  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:29.744951  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:29.761144  170748 logs.go:276] 0 containers: []
	W0229 01:54:29.761169  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:29.761182  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:29.761192  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:29.810791  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:29.810823  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:29.824497  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:29.824527  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:29.890825  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:29.890849  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:29.890865  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:29.934980  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:29.935023  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:31.187022  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.686489  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:31.210173  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:33.705583  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:32.508161  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:32.523715  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:54:32.541751  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.541796  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:54:32.541860  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:54:32.559746  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.559772  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:54:32.559826  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:54:32.578867  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.578890  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:54:32.578942  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:54:32.596025  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.596050  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:54:32.596104  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:54:32.613250  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.613277  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:54:32.613326  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:54:32.629760  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.629808  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:54:32.629867  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:54:32.646940  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.646962  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:54:32.647034  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:54:32.666140  170748 logs.go:276] 0 containers: []
	W0229 01:54:32.666167  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:54:32.666180  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:54:32.666194  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:54:32.718171  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:54:32.718206  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:54:32.732695  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:54:32.732720  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:54:32.796621  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:54:32.796642  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:54:32.796657  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:54:32.839872  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:54:32.839908  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:54:35.396632  170748 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:54:35.412053  170748 kubeadm.go:640] restartCluster took 4m11.905401704s
	W0229 01:54:35.412153  170748 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 01:54:35.412183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:54:35.838651  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:54:35.854409  170748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 01:54:35.865129  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:54:35.875642  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
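The status-2 result above is how the restart path detects missing kubeadm configs: ls exits non-zero when any listed file is absent, and the caller reads that exit code rather than parsing stderr. A small sketch of reading such an exit status in Go (the file list is shortened; this is illustration, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Probe two of the kubeadm-generated configs; ls exits with status 2
	// when any listed file is missing, which the restart path treats as
	// "no stale config to clean up".
	err := exec.Command("ls", "-la",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 2 when files are absent
	}
}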
	I0229 01:54:35.875696  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:54:36.022349  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:54:36.059938  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:54:36.131386  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 01:54:36.188327  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.686993  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:36.207432  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:38.706396  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.687792  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.188499  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:40.708268  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:43.206459  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.686549  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.689009  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:45.705669  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:47.705839  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.187643  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.193029  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:50.205484  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:52.205628  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.205895  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:54.686931  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:57.185865  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:56.206104  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:58.707011  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:54:59.186948  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:01.188066  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.687015  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:00.709471  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:03.205172  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:06.187463  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:08.686768  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:05.206413  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:07.706024  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:11.187247  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:13.686761  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:10.205156  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:12.205766  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:15.688395  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:18.186256  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:14.705829  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:17.206857  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:20.186585  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:22.186702  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:19.704997  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:21.706261  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:23.707958  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:24.187221  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.187591  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.687260  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:26.206739  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:28.705765  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.687620  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:32.688592  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:30.706982  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:33.208209  169202 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.692999  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:37.189729  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:34.705863  169202 pod_ready.go:81] duration metric: took 4m0.00680066s waiting for pod "metrics-server-57f55c9bc5-nhrls" in "kube-system" namespace to be "Ready" ...
	E0229 01:55:34.705886  169202 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:55:34.705893  169202 pod_ready.go:38] duration metric: took 4m1.59715045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
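The "context deadline exceeded" above is the standard Go outcome of polling under a context with a timeout: the 4-minute budget expires while the metrics-server pod is still not Ready, and the wait returns ctx.Err(). A self-contained sketch of that pattern (timeout shortened from 4m to 2s so it runs quickly; the check function is a stand-in for a real pod lookup):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitPodReady polls check until it returns true or ctx expires. On
// timeout it returns ctx.Err(), i.e. context.DeadlineExceeded -- the
// "context deadline exceeded" seen in the log.
func waitPodReady(ctx context.Context, check func() bool) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if check() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// minikube's budget here is 4m; shortened to 2s for the sketch.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	err := waitPodReady(ctx, func() bool { return false }) // pod never Ready
	fmt.Println(errors.Is(err, context.DeadlineExceeded))  // true
}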
	I0229 01:55:34.705912  169202 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:55:34.705982  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:34.727306  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:34.727390  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:34.745657  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:34.745730  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:34.763604  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:34.763681  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:34.784535  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:34.784611  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:34.802288  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:34.802358  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:34.821502  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:34.821576  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:34.838522  169202 logs.go:276] 0 containers: []
	W0229 01:55:34.838548  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:34.838600  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:34.855799  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:34.855896  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:34.872982  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:34.873012  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:34.873023  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:34.895617  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:34.895647  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:34.915617  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:34.915645  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:34.989082  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:34.989112  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:35.017467  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:35.017495  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:35.046564  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:35.046591  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:35.068469  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:35.068499  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:35.098606  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:35.098636  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:35.125553  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:35.125589  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:35.171952  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:35.171993  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:35.233201  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:35.233241  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:35.291798  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.292005  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298118  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.298323  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.321468  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:35.321511  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:35.338552  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:35.338582  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:35.453569  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453597  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:35.453663  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:35.453677  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453687  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453703  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:35.453716  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:35.453727  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:35.453740  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:55:39.687296  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:42.187476  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:44.189760  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:46.686245  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:48.687170  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:45.455294  169202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:55:45.470848  169202 api_server.go:72] duration metric: took 4m14.039378333s to wait for apiserver process to appear ...
	I0229 01:55:45.470876  169202 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:55:45.470953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:45.489614  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:45.489694  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:45.507881  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:45.507953  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:45.540532  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:45.540609  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:45.560035  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:45.560134  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:45.579280  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:45.579376  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:45.597768  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:45.597865  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:45.618789  169202 logs.go:276] 0 containers: []
	W0229 01:55:45.618814  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:45.618860  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:45.638075  169202 logs.go:276] 1 containers: [65ad300e66f5]
	I0229 01:55:45.638159  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:45.656571  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:45.656611  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:45.656627  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:45.686218  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:45.686254  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:45.709338  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:45.709370  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:45.755652  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:45.755689  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:55:45.822848  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:45.822883  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:45.879421  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.879584  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885205  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:45.885368  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:45.906780  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:45.906805  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:45.922651  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:45.922688  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:45.956685  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:45.956715  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:45.980079  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:45.980108  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:46.000800  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:46.000828  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:46.059443  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:46.059478  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:46.157674  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:46.157708  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:46.179678  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:46.179710  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:46.225916  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.225953  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:46.226025  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:46.226043  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226051  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226062  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:46.226068  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:46.226077  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:46.226084  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:55:51.187510  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:53.686827  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.187244  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:58.686099  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:55:56.228095  169202 api_server.go:253] Checking apiserver healthz at https://192.168.39.152:8443/healthz ...
	I0229 01:55:56.232840  169202 api_server.go:279] https://192.168.39.152:8443/healthz returned 200:
	ok
	I0229 01:55:56.233957  169202 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 01:55:56.233979  169202 api_server.go:131] duration metric: took 10.763095955s to wait for apiserver health ...
	I0229 01:55:56.233988  169202 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:55:56.234055  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:55:56.257140  169202 logs.go:276] 1 containers: [cb940569c0e2]
	I0229 01:55:56.257221  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:55:56.286172  169202 logs.go:276] 1 containers: [b4c574728e3d]
	I0229 01:55:56.286263  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:55:56.305014  169202 logs.go:276] 1 containers: [71270c4a21ca]
	I0229 01:55:56.305084  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:55:56.326712  169202 logs.go:276] 1 containers: [a0c568ce6510]
	I0229 01:55:56.326787  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:55:56.347079  169202 logs.go:276] 1 containers: [b0c5df9eb349]
	I0229 01:55:56.347145  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:55:56.367625  169202 logs.go:276] 1 containers: [3b76a45c517c]
	I0229 01:55:56.367692  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:55:56.385387  169202 logs.go:276] 0 containers: []
	W0229 01:55:56.385431  169202 logs.go:278] No container was found matching "kindnet"
	I0229 01:55:56.385480  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:55:56.403032  169202 logs.go:276] 1 containers: [583e1e06af11]
	I0229 01:55:56.403097  169202 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:55:56.422016  169202 logs.go:276] 1 containers: [65ad300e66f5]
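Each docker ps call above enumerates one control-plane component via the k8s_ name prefix that cri-dockerd gives kubelet-managed containers; an empty result, as with kindnet here, simply means that component is not deployed. A sketch of the same enumeration in Go (containerIDs is a hypothetical name, shelling out exactly as the log does):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the logged command: list docker container IDs whose
    // names match k8s_<component>, the naming convention for kubelet-created
    // containers under cri-dockerd seen in these logs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
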
	I0229 01:55:56.422055  169202 logs.go:123] Gathering logs for coredns [71270c4a21ca] ...
	I0229 01:55:56.422072  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71270c4a21ca"
	I0229 01:55:56.444017  169202 logs.go:123] Gathering logs for kube-scheduler [a0c568ce6510] ...
	I0229 01:55:56.444045  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c568ce6510"
	I0229 01:55:56.473118  169202 logs.go:123] Gathering logs for kube-controller-manager [3b76a45c517c] ...
	I0229 01:55:56.473151  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b76a45c517c"
	I0229 01:55:56.518781  169202 logs.go:123] Gathering logs for storage-provisioner [583e1e06af11] ...
	I0229 01:55:56.518819  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 583e1e06af11"
	I0229 01:55:56.542772  169202 logs.go:123] Gathering logs for kubelet ...
	I0229 01:55:56.542814  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:55:56.604186  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.604348  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611644  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:56.611847  169202 logs.go:138] Found kubelet problem: Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:56.635056  169202 logs.go:123] Gathering logs for dmesg ...
	I0229 01:55:56.635088  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:55:56.649472  169202 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:55:56.649496  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:55:56.763663  169202 logs.go:123] Gathering logs for etcd [b4c574728e3d] ...
	I0229 01:55:56.763696  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4c574728e3d"
	I0229 01:55:56.793607  169202 logs.go:123] Gathering logs for Docker ...
	I0229 01:55:56.793638  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:55:56.857562  169202 logs.go:123] Gathering logs for container status ...
	I0229 01:55:56.857597  169202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
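The container-status command above is a shell fallback chain: run crictl by resolved path if it is installed, and fall back to docker ps -a if that fails. A hypothetical Go rendering of the same fallback, with the sudo handling omitted for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when present on PATH, matching `which crictl || echo crictl`.
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // Fallback branch, matching `|| sudo docker ps -a`.
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("docker ps -a failed:", err)
            return
        }
        fmt.Print(string(out))
    }
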
	I0229 01:55:56.924313  169202 logs.go:123] Gathering logs for kube-apiserver [cb940569c0e2] ...
	I0229 01:55:56.924343  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb940569c0e2"
	I0229 01:55:56.962407  169202 logs.go:123] Gathering logs for kube-proxy [b0c5df9eb349] ...
	I0229 01:55:56.962436  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0c5df9eb349"
	I0229 01:55:56.985427  169202 logs.go:123] Gathering logs for kubernetes-dashboard [65ad300e66f5] ...
	I0229 01:55:56.985458  169202 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65ad300e66f5"
	I0229 01:55:57.007649  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007675  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:55:57.007729  169202 out.go:239] X Problems detected in kubelet:
	W0229 01:55:57.007740  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 01:51:30.836698    9947 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007748  169202 out.go:239]   Feb 29 01:51:30 no-preload-449532 kubelet[9947]: E0229 01:51:30.836742    9947 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007760  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: W0229 01:51:33.997649    9947 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	W0229 01:55:57.007769  169202 out.go:239]   Feb 29 01:51:33 no-preload-449532 kubelet[9947]: E0229 01:51:33.997680    9947 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-449532" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-449532' and this object
	I0229 01:55:57.007777  169202 out.go:304] Setting ErrFile to fd 2...
	I0229 01:55:57.007785  169202 out.go:338] TERM=,COLORTERM=, which probably does not support color
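The "Problems detected in kubelet" block repeats the reflector lines flagged while gathering logs. The underlying cause is the node authorizer: a kubelet may only read ConfigMaps referenced by pods already bound to its node, so right after a kubelet restart these list/watch calls can fail until pod-to-node relationships are re-established. A sketch of the kind of pattern scan that flags such lines (the regexp and sample data are illustrative, not minikube's logs.go implementation):

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // kubeletProblem is a hypothetical matcher for kubelet reflector W/E
    // entries like the ones flagged above.
    var kubeletProblem = regexp.MustCompile(`reflector\.go:\d+\].*(failed to list|Failed to watch)`)

    func main() {
        // Abbreviated sample journal output; the "..." elisions are deliberate.
        journal := `Feb 29 01:51:30 no-preload-449532 kubelet[9947]: W0229 ... reflector.go:539] ... failed to list *v1.ConfigMap
    Feb 29 01:51:30 no-preload-449532 kubelet[9947]: I0229 ... probe.go:97] liveness probe succeeded`
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if kubeletProblem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }
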
	I0229 01:56:00.687363  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:03.187734  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.019205  169202 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:07.019240  169202 system_pods.go:61] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.019246  169202 system_pods.go:61] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.019252  169202 system_pods.go:61] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.019257  169202 system_pods.go:61] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.019262  169202 system_pods.go:61] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.019266  169202 system_pods.go:61] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.019275  169202 system_pods.go:61] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.019281  169202 system_pods.go:61] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.019292  169202 system_pods.go:74] duration metric: took 10.78529776s to wait for pod list to return data ...
	I0229 01:56:07.019300  169202 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:07.021795  169202 default_sa.go:45] found service account: "default"
	I0229 01:56:07.021822  169202 default_sa.go:55] duration metric: took 2.513891ms for default service account to be created ...
	I0229 01:56:07.021833  169202 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:07.027968  169202 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:07.027991  169202 system_pods.go:89] "coredns-76f75df574-4wqm6" [8fa483e1-d296-44b2-bbfd-33d05fc5a60a] Running
	I0229 01:56:07.027999  169202 system_pods.go:89] "etcd-no-preload-449532" [f17159b7-bce9-49ed-abbb-1e611272d97a] Running
	I0229 01:56:07.028006  169202 system_pods.go:89] "kube-apiserver-no-preload-449532" [0bca03b9-8c72-4b7e-8acd-1b4a86223be1] Running
	I0229 01:56:07.028012  169202 system_pods.go:89] "kube-controller-manager-no-preload-449532" [4b764321-ae51-45ea-9fab-454a891c6e7d] Running
	I0229 01:56:07.028021  169202 system_pods.go:89] "kube-proxy-5vg9d" [80cfceef-8234-4a14-a209-230e1c603a29] Running
	I0229 01:56:07.028028  169202 system_pods.go:89] "kube-scheduler-no-preload-449532" [1252cbd9-b954-43bf-ad7b-4bf647ab41c9] Running
	I0229 01:56:07.028044  169202 system_pods.go:89] "metrics-server-57f55c9bc5-nhrls" [98d7836d-f417-4c30-b42c-8e391b927b7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:07.028053  169202 system_pods.go:89] "storage-provisioner" [5ef78531-9cc9-4345-bb0e-436a8c0bf8aa] Running
	I0229 01:56:07.028065  169202 system_pods.go:126] duration metric: took 6.224923ms to wait for k8s-apps to be running ...
	I0229 01:56:07.028076  169202 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:07.028144  169202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:07.043579  169202 system_svc.go:56] duration metric: took 15.495808ms WaitForService to wait for kubelet.
	I0229 01:56:07.043608  169202 kubeadm.go:581] duration metric: took 4m35.612143208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...

	I0229 01:56:07.043638  169202 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:07.046428  169202 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:07.046447  169202 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:07.046457  169202 node_conditions.go:105] duration metric: took 2.814262ms to run NodePressure ...
	I0229 01:56:07.046469  169202 start.go:228] waiting for startup goroutines ...
	I0229 01:56:07.046475  169202 start.go:233] waiting for cluster config update ...
	I0229 01:56:07.046485  169202 start.go:242] writing updated cluster config ...
	I0229 01:56:07.046741  169202 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:07.095609  169202 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 01:56:07.097736  169202 out.go:177] * Done! kubectl is now configured to use "no-preload-449532" cluster and "default" namespace by default
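The closing version line compares kubectl's minor version with the cluster's and reports the difference; pre-release suffixes such as -rc.2 do not affect it. A tiny sketch of that arithmetic (minorSkew is a hypothetical helper, not minikube's start.go code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew re-derives the "(minor skew: N)" figure from the log: the
    // absolute difference between the minor components of the two versions.
    func minorSkew(kubectl, cluster string) int {
        minor := func(v string) int {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(kubectl) - minor(cluster)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.29.2", "1.29.0-rc.2")) // 0, as logged for no-preload-449532
        fmt.Println(minorSkew("1.29.2", "1.28.4"))      // 1, as logged for default-k8s-diff-port-308557
    }
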
	I0229 01:56:05.188374  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:07.188627  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:09.688264  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:12.188346  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:14.686751  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:16.687139  169852 pod_ready.go:102] pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace has status "Ready":"False"
	I0229 01:56:18.187973  169852 pod_ready.go:81] duration metric: took 4m0.008139239s waiting for pod "metrics-server-57f55c9bc5-pvkcg" in "kube-system" namespace to be "Ready" ...
	E0229 01:56:18.187998  169852 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 01:56:18.188006  169852 pod_ready.go:38] duration metric: took 4m0.805438302s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 01:56:18.188024  169852 api_server.go:52] waiting for apiserver process to appear ...
	I0229 01:56:18.188086  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:18.208854  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:18.208946  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:18.227659  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:18.227750  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:18.246475  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:18.246552  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:18.268583  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:18.268661  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:18.287872  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:18.287962  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:18.306446  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:18.306527  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:18.325914  169852 logs.go:276] 0 containers: []
	W0229 01:56:18.325943  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:18.325996  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:18.345838  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:18.345948  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:18.365691  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:18.365744  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:18.365763  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:18.390529  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:18.390558  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:18.441681  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:18.441715  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:18.521769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:18.522020  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:18.546113  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:18.546149  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:18.564900  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:18.564934  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:18.713864  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:18.713900  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:18.751902  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:18.752004  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:18.798480  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:18.798507  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:18.845423  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:18.845452  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:18.873120  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:18.873144  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:18.898180  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:18.898209  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:18.920066  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:18.920097  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:18.991663  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:18.991695  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:19.060048  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060079  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:19.060145  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:19.060170  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:19.060184  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:19.060198  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:19.060209  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:32.235880  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:56:32.236029  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:56:32.238423  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:56:32.238502  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:56:32.238599  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:56:32.238744  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:56:32.238904  170748 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 01:56:32.239073  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:56:32.239200  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:56:32.239271  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:56:32.239350  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:56:32.241126  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:56:32.241192  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:56:32.241251  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:56:32.241317  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:56:32.241394  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:56:32.241469  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:56:32.241523  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:56:32.241605  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:56:32.241700  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:56:32.241811  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:56:32.241921  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:56:32.242001  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:56:32.242081  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:56:32.242164  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:56:32.242247  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:56:32.242344  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:56:32.242427  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:56:32.242484  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:56:29.061463  169852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:56:29.077717  169852 api_server.go:72] duration metric: took 4m14.467720845s to wait for apiserver process to appear ...
	I0229 01:56:29.077739  169852 api_server.go:88] waiting for apiserver healthz status ...
	I0229 01:56:29.077840  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:29.096876  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:29.096961  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:29.114345  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:29.114423  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:29.131634  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:29.131705  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:29.149068  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:29.149139  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:29.166411  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:29.166483  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:29.182906  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:29.182982  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:29.199536  169852 logs.go:276] 0 containers: []
	W0229 01:56:29.199556  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:29.199599  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:29.218889  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:29.218951  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:29.237207  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:29.237245  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:29.237258  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:29.253233  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:29.253267  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:29.379843  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:29.379871  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:29.411795  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:29.411822  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:29.438557  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:29.438583  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:29.459479  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:29.459505  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:29.507590  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:29.507620  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:29.573263  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.573453  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.595549  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:29.595574  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:29.637026  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:29.637058  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:29.658572  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:29.658603  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:29.683814  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:29.683844  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:29.705482  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:29.705511  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:29.768497  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:29.768531  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:29.836247  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836270  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:29.836320  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:29.836331  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:29.836339  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:29.836350  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:29.836360  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:32.244633  170748 out.go:204]   - Booting up control plane ...
	I0229 01:56:32.244727  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:56:32.244807  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:56:32.244884  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:56:32.244992  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:56:32.245189  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:56:32.245267  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:56:32.245360  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245532  170748 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245599  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245746  170748 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.245826  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.245998  170748 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246093  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246273  170748 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246359  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:56:32.246574  170748 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:56:32.246588  170748 kubeadm.go:322] 
	I0229 01:56:32.246630  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:56:32.246679  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:56:32.246693  170748 kubeadm.go:322] 
	I0229 01:56:32.246740  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:56:32.246791  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:56:32.246892  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:56:32.246905  170748 kubeadm.go:322] 
	I0229 01:56:32.247026  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:56:32.247072  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:56:32.247116  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:56:32.247124  170748 kubeadm.go:322] 
	I0229 01:56:32.247212  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:56:32.247289  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0229 01:56:32.247361  170748 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0229 01:56:32.247406  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:56:32.247488  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:56:32.247541  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
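The kubelet-check loop quoted above is a repeated GET against the kubelet's healthz endpoint on port 10248, and "connection refused" means nothing was listening at all, i.e. the kubelet never came up or kept crashing, rather than answering unhealthy. A sketch of the equivalent probe (kubeletHealthy is a hypothetical name, not kubeadm's actual check code):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // kubeletHealthy performs the same GET as
    // `curl -sSL http://localhost:10248/healthz`.
    func kubeletHealthy() error {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := kubeletHealthy(); err != nil {
            fmt.Println("kubelet not healthy:", err)
            return
        }
        fmt.Println("kubelet healthy")
    }
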
	W0229 01:56:32.247677  170748 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 01:56:32.247732  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 01:56:32.689675  170748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:32.704123  170748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 01:56:32.713829  170748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
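The "config check failed, skipping stale config cleanup" line above means the ls probe exited non-zero because none of the kubeconfig files survived the preceding kubeadm reset, so there was nothing stale to remove. A hypothetical local equivalent of that existence check, not minikube's kubeadm.go:152 code:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Same four files the logged ls command probes for.
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        stale := 0
        for _, f := range files {
            if _, err := os.Stat(f); err == nil {
                stale++
                fmt.Println("stale config present:", f)
            }
        }
        if stale == 0 {
            fmt.Println("no stale configs; skipping cleanup")
        }
    }
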
	I0229 01:56:32.713881  170748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 01:56:32.847290  170748 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 01:56:32.879658  170748 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 01:56:32.959513  170748 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
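Of the repeated preflight warnings, the cgroup-driver one is the classic trip-wire: kubelet and Docker must agree on a driver, and kubeadm recommends systemd. The usual remedy is setting "exec-opts": ["native.cgroupdriver=systemd"] in /etc/docker/daemon.json and restarting Docker, though minikube's images normally preconfigure this. A sketch of detecting the mismatch; the check itself is illustrative, not kubeadm's preflight code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask Docker which cgroup driver it is using.
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        driver := strings.TrimSpace(string(out))
        if driver != "systemd" {
            fmt.Printf("WARNING: Docker cgroup driver is %q; kubeadm recommends \"systemd\"\n", driver)
            return
        }
        fmt.Println("cgroup driver OK:", driver)
    }
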
	I0229 01:56:39.838133  169852 api_server.go:253] Checking apiserver healthz at https://192.168.72.56:8444/healthz ...
	I0229 01:56:39.843637  169852 api_server.go:279] https://192.168.72.56:8444/healthz returned 200:
	ok
	I0229 01:56:39.844896  169852 api_server.go:141] control plane version: v1.28.4
	I0229 01:56:39.844921  169852 api_server.go:131] duration metric: took 10.767174552s to wait for apiserver health ...
	I0229 01:56:39.844930  169852 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 01:56:39.845005  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:56:39.867188  169852 logs.go:276] 1 containers: [4d9fe800e019]
	I0229 01:56:39.867264  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:56:39.890265  169852 logs.go:276] 1 containers: [31461fa1a3f3]
	I0229 01:56:39.890345  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:56:39.911540  169852 logs.go:276] 1 containers: [a93fc1606563]
	I0229 01:56:39.911617  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:56:39.939266  169852 logs.go:276] 1 containers: [5bca153c0117]
	I0229 01:56:39.939340  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:56:39.957270  169852 logs.go:276] 1 containers: [60e3f6ea23fc]
	I0229 01:56:39.957337  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:56:39.974956  169852 logs.go:276] 1 containers: [58cf3fc8b5ee]
	I0229 01:56:39.975025  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:56:39.991794  169852 logs.go:276] 0 containers: []
	W0229 01:56:39.991815  169852 logs.go:278] No container was found matching "kindnet"
	I0229 01:56:39.991856  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 01:56:40.009143  169852 logs.go:276] 1 containers: [10e5bfa7b350]
	I0229 01:56:40.009208  169852 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:56:40.026359  169852 logs.go:276] 1 containers: [479c213bcb60]
	I0229 01:56:40.026392  169852 logs.go:123] Gathering logs for kube-proxy [60e3f6ea23fc] ...
	I0229 01:56:40.026406  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60e3f6ea23fc"
	I0229 01:56:40.046944  169852 logs.go:123] Gathering logs for storage-provisioner [10e5bfa7b350] ...
	I0229 01:56:40.046969  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10e5bfa7b350"
	I0229 01:56:40.067580  169852 logs.go:123] Gathering logs for kubernetes-dashboard [479c213bcb60] ...
	I0229 01:56:40.067604  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 479c213bcb60"
	I0229 01:56:40.091791  169852 logs.go:123] Gathering logs for Docker ...
	I0229 01:56:40.091812  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:56:40.151587  169852 logs.go:123] Gathering logs for kubelet ...
	I0229 01:56:40.151619  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 01:56:40.221769  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.221978  169852 logs.go:138] Found kubelet problem: Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.247432  169852 logs.go:123] Gathering logs for etcd [31461fa1a3f3] ...
	I0229 01:56:40.247466  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31461fa1a3f3"
	I0229 01:56:40.283196  169852 logs.go:123] Gathering logs for coredns [a93fc1606563] ...
	I0229 01:56:40.283227  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a93fc1606563"
	I0229 01:56:40.305677  169852 logs.go:123] Gathering logs for kube-scheduler [5bca153c0117] ...
	I0229 01:56:40.305703  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bca153c0117"
	I0229 01:56:40.333975  169852 logs.go:123] Gathering logs for container status ...
	I0229 01:56:40.334003  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 01:56:40.402520  169852 logs.go:123] Gathering logs for dmesg ...
	I0229 01:56:40.402558  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:56:40.418892  169852 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:56:40.418926  169852 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 01:56:40.537554  169852 logs.go:123] Gathering logs for kube-apiserver [4d9fe800e019] ...
	I0229 01:56:40.537597  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d9fe800e019"
	I0229 01:56:40.576026  169852 logs.go:123] Gathering logs for kube-controller-manager [58cf3fc8b5ee] ...
	I0229 01:56:40.576067  169852 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58cf3fc8b5ee"
	I0229 01:56:40.622017  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622055  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 01:56:40.622123  169852 out.go:239] X Problems detected in kubelet:
	W0229 01:56:40.622137  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: W0229 01:52:18.057295    9872 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	W0229 01:56:40.622147  169852 out.go:239]   Feb 29 01:52:18 default-k8s-diff-port-308557 kubelet[9872]: E0229 01:52:18.057453    9872 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-308557" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-308557' and this object
	I0229 01:56:40.622165  169852 out.go:304] Setting ErrFile to fd 2...
	I0229 01:56:40.622178  169852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:56:50.632890  169852 system_pods.go:59] 8 kube-system pods found
	I0229 01:56:50.632919  169852 system_pods.go:61] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.632924  169852 system_pods.go:61] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.632929  169852 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.632933  169852 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.632936  169852 system_pods.go:61] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.632939  169852 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.632944  169852 system_pods.go:61] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.632948  169852 system_pods.go:61] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.632955  169852 system_pods.go:74] duration metric: took 10.788019346s to wait for pod list to return data ...
	I0229 01:56:50.632961  169852 default_sa.go:34] waiting for default service account to be created ...
	I0229 01:56:50.636262  169852 default_sa.go:45] found service account: "default"
	I0229 01:56:50.636279  169852 default_sa.go:55] duration metric: took 3.313291ms for default service account to be created ...
	I0229 01:56:50.636292  169852 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 01:56:50.641677  169852 system_pods.go:86] 8 kube-system pods found
	I0229 01:56:50.641698  169852 system_pods.go:89] "coredns-5dd5756b68-4zvwl" [d003c4f3-b873-4069-8dfc-294c23dac6ce] Running
	I0229 01:56:50.641704  169852 system_pods.go:89] "etcd-default-k8s-diff-port-308557" [3d888d0a-d92b-46a6-8aac-78f084337aae] Running
	I0229 01:56:50.641710  169852 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-308557" [ace534b0-445b-47a0-a2df-9601ce257e16] Running
	I0229 01:56:50.641714  169852 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-308557" [7044a688-c16d-4bc9-b79f-cca357ed58fa] Running
	I0229 01:56:50.641718  169852 system_pods.go:89] "kube-proxy-lkcrl" [8dd6771f-1354-4dbb-9489-6fa1908a7d89] Running
	I0229 01:56:50.641722  169852 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-308557" [d58c5c98-6a03-4264-bc09-deafe558717b] Running
	I0229 01:56:50.641730  169852 system_pods.go:89] "metrics-server-57f55c9bc5-pvkcg" [54f69e0f-cf68-4aad-aa01-c657b5c99b7c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 01:56:50.641736  169852 system_pods.go:89] "storage-provisioner" [06401443-f89a-4271-8643-18ecb453a8c0] Running
	I0229 01:56:50.641743  169852 system_pods.go:126] duration metric: took 5.445558ms to wait for k8s-apps to be running ...
	I0229 01:56:50.641749  169852 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 01:56:50.641806  169852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:56:50.660446  169852 system_svc.go:56] duration metric: took 18.690637ms WaitForService to wait for kubelet.
	I0229 01:56:50.660469  169852 kubeadm.go:581] duration metric: took 4m36.05047851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 01:56:50.660486  169852 node_conditions.go:102] verifying NodePressure condition ...
	I0229 01:56:50.663507  169852 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 01:56:50.663526  169852 node_conditions.go:123] node cpu capacity is 2
	I0229 01:56:50.663537  169852 node_conditions.go:105] duration metric: took 3.04635ms to run NodePressure ...
	I0229 01:56:50.663547  169852 start.go:228] waiting for startup goroutines ...
	I0229 01:56:50.663552  169852 start.go:233] waiting for cluster config update ...
	I0229 01:56:50.663561  169852 start.go:242] writing updated cluster config ...
	I0229 01:56:50.663826  169852 ssh_runner.go:195] Run: rm -f paused
	I0229 01:56:50.710751  169852 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 01:56:50.712950  169852 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-308557" cluster and "default" namespace by default
	I0229 01:58:29.528786  170748 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 01:58:29.528884  170748 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 01:58:29.530491  170748 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 01:58:29.530596  170748 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 01:58:29.530680  170748 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 01:58:29.530764  170748 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 01:58:29.530861  170748 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0229 01:58:29.530964  170748 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 01:58:29.531068  170748 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 01:58:29.531119  170748 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 01:58:29.531176  170748 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 01:58:29.532944  170748 out.go:204]   - Generating certificates and keys ...
	I0229 01:58:29.533047  170748 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 01:58:29.533144  170748 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 01:58:29.533247  170748 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 01:58:29.533305  170748 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 01:58:29.533379  170748 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 01:58:29.533441  170748 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 01:58:29.533506  170748 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 01:58:29.533567  170748 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 01:58:29.533636  170748 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 01:58:29.533700  170748 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 01:58:29.533744  170748 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 01:58:29.533806  170748 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 01:58:29.533878  170748 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 01:58:29.533967  170748 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 01:58:29.534067  170748 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 01:58:29.534153  170748 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 01:58:29.534217  170748 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 01:58:29.535694  170748 out.go:204]   - Booting up control plane ...
	I0229 01:58:29.535778  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 01:58:29.535844  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 01:58:29.535904  170748 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 01:58:29.535972  170748 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 01:58:29.536127  170748 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 01:58:29.536212  170748 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 01:58:29.536285  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536458  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536538  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536729  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.536791  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.536941  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537007  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537189  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537267  170748 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 01:58:29.537495  170748 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 01:58:29.537513  170748 kubeadm.go:322] 
	I0229 01:58:29.537569  170748 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 01:58:29.537626  170748 kubeadm.go:322] 	timed out waiting for the condition
	I0229 01:58:29.537636  170748 kubeadm.go:322] 
	I0229 01:58:29.537685  170748 kubeadm.go:322] This error is likely caused by:
	I0229 01:58:29.537744  170748 kubeadm.go:322] 	- The kubelet is not running
	I0229 01:58:29.537903  170748 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 01:58:29.537915  170748 kubeadm.go:322] 
	I0229 01:58:29.538065  170748 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 01:58:29.538113  170748 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 01:58:29.538174  170748 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 01:58:29.538183  170748 kubeadm.go:322] 
	I0229 01:58:29.538325  170748 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 01:58:29.538450  170748 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 01:58:29.538581  170748 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 01:58:29.538656  170748 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 01:58:29.538743  170748 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 01:58:29.538829  170748 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 01:58:29.538866  170748 kubeadm.go:406] StartCluster complete in 8m6.061536028s
	I0229 01:58:29.538947  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 01:58:29.556117  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.556141  170748 logs.go:278] No container was found matching "kube-apiserver"
	I0229 01:58:29.556205  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 01:58:29.572791  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.572812  170748 logs.go:278] No container was found matching "etcd"
	I0229 01:58:29.572857  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 01:58:29.589544  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.589565  170748 logs.go:278] No container was found matching "coredns"
	I0229 01:58:29.589625  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 01:58:29.605410  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.605426  170748 logs.go:278] No container was found matching "kube-scheduler"
	I0229 01:58:29.605472  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 01:58:29.621393  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.621412  170748 logs.go:278] No container was found matching "kube-proxy"
	I0229 01:58:29.621450  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 01:58:29.637671  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.637690  170748 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 01:58:29.637732  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 01:58:29.653501  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.653533  170748 logs.go:278] No container was found matching "kindnet"
	I0229 01:58:29.653590  170748 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 01:58:29.669033  170748 logs.go:276] 0 containers: []
	W0229 01:58:29.669058  170748 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 01:58:29.669072  170748 logs.go:123] Gathering logs for kubelet ...
	I0229 01:58:29.669086  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 01:58:29.722126  170748 logs.go:123] Gathering logs for dmesg ...
	I0229 01:58:29.722161  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 01:58:29.735919  170748 logs.go:123] Gathering logs for describe nodes ...
	I0229 01:58:29.735946  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 01:58:29.803585  170748 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 01:58:29.803615  170748 logs.go:123] Gathering logs for Docker ...
	I0229 01:58:29.803629  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 01:58:29.843153  170748 logs.go:123] Gathering logs for container status ...
	I0229 01:58:29.843183  170748 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 01:58:29.906091  170748 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 01:58:29.906150  170748 out.go:239] * 
	W0229 01:58:29.906209  170748 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.906231  170748 out.go:239] * 
	W0229 01:58:29.906995  170748 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 01:58:29.910220  170748 out.go:177] 
	W0229 01:58:29.911536  170748 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 01:58:29.911581  170748 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 01:58:29.911600  170748 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 01:58:29.912937  170748 out.go:177] 
	
	
	==> Docker <==
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776150999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776206246Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776256438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776308167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776347865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776476626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776540257Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776622510Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776676461Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776885278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.776965976Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777030325Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777311132Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777539525Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777641426Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 01:50:19 old-k8s-version-096771 dockerd[1059]: time="2024-02-29T01:50:19.777854491Z" level=info msg="containerd successfully booted in 0.034774s"
	Feb 29 01:50:21 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:21.976247648Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.012708683Z" level=info msg="Loading containers: start."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.140588585Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.193875502Z" level=info msg="Loading containers: done."
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209172228Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.209243974Z" level=info msg="Daemon has completed initialization"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241102168Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 01:50:22 old-k8s-version-096771 dockerd[1053]: time="2024-02-29T01:50:22.241236205Z" level=info msg="API listen on [::]:2376"
	Feb 29 01:50:22 old-k8s-version-096771 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-29T02:13:28Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 01:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610762] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb29 01:50] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +2.425571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.914813] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.071671] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054332] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.114259] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.335012] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.127181] systemd-fstab-generator[824]: Ignoring "noauto" option for root device
	[  +0.149601] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +5.311700] systemd-fstab-generator[1044]: Ignoring "noauto" option for root device
	[  +0.076969] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.064548] systemd-fstab-generator[1441]: Ignoring "noauto" option for root device
	[  +0.060768] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 01:54] systemd-fstab-generator[9475]: Ignoring "noauto" option for root device
	[  +0.059471] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 01:56] systemd-fstab-generator[11246]: Ignoring "noauto" option for root device
	[  +0.070220] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 02:13:28 up 23 min,  0 users,  load average: 0.00, 0.06, 0.12
	Linux old-k8s-version-096771 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1352.
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: I0229 02:13:27.781978   25675 server.go:410] Version: v1.16.0
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: I0229 02:13:27.782136   25675 plugins.go:100] No cloud provider specified.
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: I0229 02:13:27.782145   25675 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: I0229 02:13:27.784302   25675 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: W0229 02:13:27.785150   25675 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: W0229 02:13:27.785258   25675 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 02:13:27 old-k8s-version-096771 kubelet[25675]: F0229 02:13:27.785286   25675 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:13:27 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 02:13:28 old-k8s-version-096771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1353.
	Feb 29 02:13:28 old-k8s-version-096771 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 02:13:28 old-k8s-version-096771 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: I0229 02:13:28.558169   25752 server.go:410] Version: v1.16.0
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: I0229 02:13:28.558361   25752 plugins.go:100] No cloud provider specified.
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: I0229 02:13:28.558371   25752 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: I0229 02:13:28.560842   25752 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: W0229 02:13:28.561703   25752 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: W0229 02:13:28.562547   25752 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 02:13:28 old-k8s-version-096771 kubelet[25752]: F0229 02:13:28.562645   25752 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 02:13:28 old-k8s-version-096771 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 02:13:28 old-k8s-version-096771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 2 (237.175981ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-096771" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (356.04s)
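A note on the failure above: the "==> kubelet <==" section shows the kubelet crash-looping with "failed to run Kubelet: mountpoint for cpu not found", and kubeadm's preflight warned that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended, which is what minikube's "Suggestion:" line targets. A minimal diagnosis-and-retry sketch, reusing the profile name and flags from the log itself (the docker and journalctl invocations are standard CLI usage, not output from this run):

    # Confirm which cgroup driver Docker is using (preflight reported "cgroupfs")
    docker info --format '{{.CgroupDriver}}'

    # Inspect the kubelet crash loop captured in the "==> kubelet <==" section
    journalctl -xeu kubelet | tail -n 50

    # Retry the profile with the kubelet pinned to the systemd driver,
    # per the Suggestion line in the log
    minikube start -p old-k8s-version-096771 \
      --kubernetes-version=v1.16.0 --driver=kvm2 \
      --extra-config=kubelet.cgroup-driver=systemd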


Test pass (292/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.53
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 4.54
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.37
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 68.02
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 150.18
38 TestAddons/parallel/Registry 14.99
39 TestAddons/parallel/Ingress 28.74
40 TestAddons/parallel/InspektorGadget 11.77
41 TestAddons/parallel/MetricsServer 6.16
42 TestAddons/parallel/HelmTiller 12.76
44 TestAddons/parallel/CSI 97.05
45 TestAddons/parallel/Headlamp 14.87
46 TestAddons/parallel/CloudSpanner 5.81
47 TestAddons/parallel/LocalPath 55.19
48 TestAddons/parallel/NvidiaDevicePlugin 5.45
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 13.41
54 TestCertOptions 82.28
55 TestCertExpiration 288.38
56 TestDockerFlags 100.66
57 TestForceSystemdFlag 84.72
58 TestForceSystemdEnv 99.69
60 TestKVMDriverInstallOrUpdate 3.84
64 TestErrorSpam/setup 49.09
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.25
68 TestErrorSpam/unpause 1.42
69 TestErrorSpam/stop 12.55
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 63.86
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.73
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.29
81 TestFunctional/serial/CacheCmd/cache/add_local 1.35
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 40.66
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.07
92 TestFunctional/serial/LogsFileCmd 1.07
93 TestFunctional/serial/InvalidService 4.13
95 TestFunctional/parallel/ConfigCmd 0.44
96 TestFunctional/parallel/DashboardCmd 19.79
97 TestFunctional/parallel/DryRun 0.35
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.06
103 TestFunctional/parallel/ServiceCmdConnect 8.81
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 40.25
107 TestFunctional/parallel/SSHCmd 0.55
108 TestFunctional/parallel/CpCmd 1.44
109 TestFunctional/parallel/MySQL 35.16
110 TestFunctional/parallel/FileSync 0.22
111 TestFunctional/parallel/CertSync 1.33
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
119 TestFunctional/parallel/License 0.27
120 TestFunctional/parallel/ServiceCmd/DeployApp 13.24
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
122 TestFunctional/parallel/MountCmd/any-port 10.9
123 TestFunctional/parallel/ProfileCmd/profile_list 0.28
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
125 TestFunctional/parallel/Version/short 0.07
126 TestFunctional/parallel/Version/components 0.66
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
131 TestFunctional/parallel/ImageCommands/ImageBuild 4.47
132 TestFunctional/parallel/ImageCommands/Setup 1.31
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.46
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.74
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.09
136 TestFunctional/parallel/MountCmd/specific-port 1.78
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
138 TestFunctional/parallel/ServiceCmd/List 0.6
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
141 TestFunctional/parallel/ServiceCmd/Format 0.53
142 TestFunctional/parallel/ServiceCmd/URL 0.47
144 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.39
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.01
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.67
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.49
152 TestFunctional/parallel/DockerEnv/bash 0.9
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
156 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
157 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
161 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
162 TestFunctional/delete_addon-resizer_images 0.07
163 TestFunctional/delete_my-image_image 0.01
164 TestFunctional/delete_minikube_cached_images 0.01
165 TestGvisorAddon 319.85
168 TestImageBuild/serial/Setup 45.53
169 TestImageBuild/serial/NormalBuild 1.51
170 TestImageBuild/serial/BuildWithBuildArg 1.01
171 TestImageBuild/serial/BuildWithDockerIgnore 0.39
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
182 TestJSONOutput/start/Command 65.45
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.58
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.56
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 8.11
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.21
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 103.43
214 TestMountStart/serial/StartWithMountFirst 28.67
215 TestMountStart/serial/VerifyMountFirst 0.4
216 TestMountStart/serial/StartWithMountSecond 31.97
217 TestMountStart/serial/VerifyMountSecond 0.39
218 TestMountStart/serial/DeleteFirst 0.69
219 TestMountStart/serial/VerifyMountPostDelete 0.39
220 TestMountStart/serial/Stop 2.1
221 TestMountStart/serial/RestartStopped 24.1
222 TestMountStart/serial/VerifyMountPostStop 0.39
225 TestMultiNode/serial/FreshStart2Nodes 116.94
226 TestMultiNode/serial/DeployApp2Nodes 4.37
227 TestMultiNode/serial/PingHostFrom2Pods 0.89
228 TestMultiNode/serial/AddNode 47.13
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.21
231 TestMultiNode/serial/CopyFile 7.53
232 TestMultiNode/serial/StopNode 3.32
233 TestMultiNode/serial/StartAfterStop 141.39
234 TestMultiNode/serial/RestartKeepsNodes 172.14
235 TestMultiNode/serial/DeleteNode 1.51
236 TestMultiNode/serial/StopMultiNode 25.53
237 TestMultiNode/serial/RestartMultiNode 115.53
238 TestMultiNode/serial/ValidateNameConflict 52.24
243 TestPreload 166.47
245 TestScheduledStopUnix 117.27
246 TestSkaffold 143.79
249 TestRunningBinaryUpgrade 187.47
254 TestPause/serial/Start 92.14
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
268 TestNoKubernetes/serial/StartWithK8s 98.68
269 TestPause/serial/SecondStartNoReconfiguration 74.62
270 TestPause/serial/Pause 0.77
271 TestNoKubernetes/serial/StartWithStopK8s 7.9
272 TestPause/serial/VerifyStatus 0.33
273 TestPause/serial/Unpause 0.73
274 TestPause/serial/PauseAgain 1.14
275 TestPause/serial/DeletePaused 1.14
276 TestPause/serial/VerifyDeletedResources 6.22
277 TestNoKubernetes/serial/Start 29.52
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
279 TestNoKubernetes/serial/ProfileList 74.26
280 TestNoKubernetes/serial/Stop 2.16
281 TestNoKubernetes/serial/StartNoArgs 31.84
289 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
290 TestStoppedBinaryUpgrade/Setup 0.43
291 TestStoppedBinaryUpgrade/Upgrade 192.69
292 TestNetworkPlugins/group/auto/Start 83.32
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
294 TestNetworkPlugins/group/kindnet/Start 82.38
295 TestNetworkPlugins/group/auto/KubeletFlags 0.22
296 TestNetworkPlugins/group/auto/NetCatPod 11.23
297 TestNetworkPlugins/group/auto/DNS 0.26
298 TestNetworkPlugins/group/auto/Localhost 0.19
299 TestNetworkPlugins/group/auto/HairPin 0.2
300 TestNetworkPlugins/group/calico/Start 104.58
301 TestNetworkPlugins/group/custom-flannel/Start 96.89
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
304 TestNetworkPlugins/group/kindnet/NetCatPod 14.26
305 TestNetworkPlugins/group/kindnet/DNS 0.17
306 TestNetworkPlugins/group/kindnet/Localhost 0.16
307 TestNetworkPlugins/group/kindnet/HairPin 0.16
308 TestNetworkPlugins/group/false/Start 77.94
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.26
312 TestNetworkPlugins/group/calico/KubeletFlags 0.24
313 TestNetworkPlugins/group/calico/NetCatPod 13.32
314 TestNetworkPlugins/group/custom-flannel/DNS 0.19
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
317 TestNetworkPlugins/group/calico/DNS 0.19
318 TestNetworkPlugins/group/calico/Localhost 0.15
319 TestNetworkPlugins/group/calico/HairPin 0.15
320 TestNetworkPlugins/group/false/KubeletFlags 0.63
321 TestNetworkPlugins/group/false/NetCatPod 12.27
322 TestNetworkPlugins/group/enable-default-cni/Start 72.45
323 TestNetworkPlugins/group/flannel/Start 109.67
324 TestNetworkPlugins/group/bridge/Start 127.03
325 TestNetworkPlugins/group/false/DNS 0.19
326 TestNetworkPlugins/group/false/Localhost 0.15
327 TestNetworkPlugins/group/false/HairPin 0.15
328 TestNetworkPlugins/group/kubenet/Start 162.82
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
338 TestNetworkPlugins/group/flannel/NetCatPod 13.24
339 TestNetworkPlugins/group/flannel/DNS 0.19
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
341 TestNetworkPlugins/group/flannel/Localhost 0.16
342 TestNetworkPlugins/group/bridge/NetCatPod 12.26
343 TestNetworkPlugins/group/flannel/HairPin 0.18
344 TestNetworkPlugins/group/bridge/DNS 0.23
345 TestNetworkPlugins/group/bridge/Localhost 0.18
346 TestNetworkPlugins/group/bridge/HairPin 0.18
348 TestStartStop/group/no-preload/serial/FirstStart 87.21
350 TestStartStop/group/embed-certs/serial/FirstStart 87.47
351 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
352 TestNetworkPlugins/group/kubenet/NetCatPod 10.38
353 TestNetworkPlugins/group/kubenet/DNS 0.47
354 TestNetworkPlugins/group/kubenet/Localhost 0.15
355 TestNetworkPlugins/group/kubenet/HairPin 0.18
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.51
358 TestStartStop/group/no-preload/serial/DeployApp 8.33
359 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
360 TestStartStop/group/no-preload/serial/Stop 13.18
361 TestStartStop/group/embed-certs/serial/DeployApp 8.36
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
363 TestStartStop/group/embed-certs/serial/Stop 13.16
364 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
365 TestStartStop/group/no-preload/serial/SecondStart 602.8
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
367 TestStartStop/group/embed-certs/serial/SecondStart 340.45
368 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.1
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
372 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 592.28
375 TestStartStop/group/old-k8s-version/serial/Stop 2.18
376 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
381 TestStartStop/group/embed-certs/serial/Pause 2.72
383 TestStartStop/group/newest-cni/serial/FirstStart 70.52
384 TestStartStop/group/newest-cni/serial/DeployApp 0
385 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.89
386 TestStartStop/group/newest-cni/serial/Stop 13.13
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
388 TestStartStop/group/newest-cni/serial/SecondStart 45.94
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
392 TestStartStop/group/newest-cni/serial/Pause 2.62
393 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
395 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
396 TestStartStop/group/no-preload/serial/Pause 2.51
397 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
399 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
400 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
TestDownloadOnly/v1.16.0/json-events (9.53s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-758074 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-758074 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (9.530325396s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.53s)
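The json-events variants drive minikube with -o=json, which switches progress reporting to one JSON event per line on stdout. A minimal sketch for inspecting that stream by hand (the profile name download-only-demo is hypothetical, and the .data.message field is an assumption about minikube's JSON event schema; verify against a local run):

    # Stream the JSON progress events and print only the human-readable messages
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
      --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 \
      | jq -r 'select(.data.message != null) | .data.message'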

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-758074
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-758074: exit status 85 (73.827621ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-758074 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |          |
	|         | -p download-only-758074        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:46:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:46:20.043498  122607 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:46:20.043651  122607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:20.043661  122607 out.go:304] Setting ErrFile to fd 2...
	I0229 00:46:20.043665  122607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:20.043869  122607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	W0229 00:46:20.044005  122607 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18063-115328/.minikube/config/config.json: open /home/jenkins/minikube-integration/18063-115328/.minikube/config/config.json: no such file or directory
	I0229 00:46:20.044560  122607 out.go:298] Setting JSON to true
	I0229 00:46:20.045440  122607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1731,"bootTime":1709165849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:46:20.045506  122607 start.go:139] virtualization: kvm guest
	I0229 00:46:20.048046  122607 out.go:97] [download-only-758074] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 00:46:20.049520  122607 out.go:169] MINIKUBE_LOCATION=18063
	W0229 00:46:20.048187  122607 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 00:46:20.048256  122607 notify.go:220] Checking for updates...
	I0229 00:46:20.052150  122607 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:46:20.053551  122607 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:46:20.054920  122607 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:46:20.056197  122607 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 00:46:20.058709  122607 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 00:46:20.059014  122607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:46:20.154879  122607 out.go:97] Using the kvm2 driver based on user configuration
	I0229 00:46:20.154912  122607 start.go:299] selected driver: kvm2
	I0229 00:46:20.154921  122607 start.go:903] validating driver "kvm2" against <nil>
	I0229 00:46:20.155299  122607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:46:20.155426  122607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 00:46:20.171568  122607 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 00:46:20.171652  122607 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 00:46:20.172134  122607 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 00:46:20.172271  122607 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 00:46:20.172326  122607 cni.go:84] Creating CNI manager for ""
	I0229 00:46:20.172344  122607 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 00:46:20.172352  122607 start_flags.go:323] config:
	{Name:download-only-758074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-758074 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:46:20.172582  122607 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 00:46:20.174581  122607 out.go:97] Downloading VM boot image ...
	I0229 00:46:20.174630  122607 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 00:46:22.874849  122607 out.go:97] Starting control plane node download-only-758074 in cluster download-only-758074
	I0229 00:46:22.874878  122607 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 00:46:22.897484  122607 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 00:46:22.897517  122607 cache.go:56] Caching tarball of preloaded images
	I0229 00:46:22.897756  122607 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 00:46:22.899483  122607 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 00:46:22.899509  122607 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 00:46:22.923676  122607 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-758074"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
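The non-zero exit here is the expected outcome, not a flake: a --download-only profile never gets a control-plane node, so minikube logs has nothing to collect and exits with status 85, which the assertion accepts. To confirm the same behavior (profile name illustrative):

  minikube logs -p download-only-demo
  echo $?   # 85 in this report, with the hint: The control plane node "" does not exist.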

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-758074
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-748087 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-748087 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (4.541136685s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.54s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-748087
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-748087: exit status 85 (72.251131ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-758074 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |                     |
	|         | -p download-only-758074        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| delete  | -p download-only-758074        | download-only-758074 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| start   | -o=json --download-only        | download-only-748087 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |                     |
	|         | -p download-only-748087        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:46:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:46:29.923073  122764 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:46:29.923649  122764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:29.923665  122764 out.go:304] Setting ErrFile to fd 2...
	I0229 00:46:29.923673  122764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:29.924132  122764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 00:46:29.925199  122764 out.go:298] Setting JSON to true
	I0229 00:46:29.926175  122764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1741,"bootTime":1709165849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:46:29.926247  122764 start.go:139] virtualization: kvm guest
	I0229 00:46:29.928235  122764 out.go:97] [download-only-748087] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 00:46:29.929649  122764 out.go:169] MINIKUBE_LOCATION=18063
	I0229 00:46:29.928394  122764 notify.go:220] Checking for updates...
	I0229 00:46:29.932743  122764 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:46:29.934393  122764 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:46:29.935847  122764 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:46:29.937214  122764 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-748087"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-748087
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-568144 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-568144 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 : (4.368091723s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-568144
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-568144: exit status 85 (73.360678ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-758074 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |                     |
	|         | -p download-only-758074           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| delete  | -p download-only-758074           | download-only-758074 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| start   | -o=json --download-only           | download-only-748087 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |                     |
	|         | -p download-only-748087           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| delete  | -p download-only-748087           | download-only-748087 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC | 29 Feb 24 00:46 UTC |
	| start   | -o=json --download-only           | download-only-568144 | jenkins | v1.32.0 | 29 Feb 24 00:46 UTC |                     |
	|         | -p download-only-568144           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 00:46:34
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 00:46:34.808745  122921 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:46:34.809264  122921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:34.809287  122921 out.go:304] Setting ErrFile to fd 2...
	I0229 00:46:34.809295  122921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:46:34.809774  122921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 00:46:34.810898  122921 out.go:298] Setting JSON to true
	I0229 00:46:34.811814  122921 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1746,"bootTime":1709165849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:46:34.811889  122921 start.go:139] virtualization: kvm guest
	I0229 00:46:34.813667  122921 out.go:97] [download-only-568144] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 00:46:34.815118  122921 out.go:169] MINIKUBE_LOCATION=18063
	I0229 00:46:34.813799  122921 notify.go:220] Checking for updates...
	I0229 00:46:34.817638  122921 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:46:34.819101  122921 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:46:34.820537  122921 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:46:34.821799  122921 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-568144"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-568144
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-509008 --alsologtostderr --binary-mirror http://127.0.0.1:45375 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-509008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-509008
--- PASS: TestBinaryMirror (0.58s)
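TestBinaryMirror redirects minikube's Kubernetes binary downloads (kubectl, kubelet, kubeadm) to a local HTTP endpoint via --binary-mirror. A rough sketch of the same setup, assuming a directory already populated with the binaries at the paths minikube expects (port and directory are hypothetical):

  # Serve pre-downloaded binaries over plain HTTP
  python3 -m http.server 8000 --directory /srv/k8s-binaries &
  # Point minikube's binary downloads at the mirror instead of upstream
  minikube start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:8000 --driver=kvm2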

                                                
                                    
x
+
TestOffline (68.02s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-653786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-653786 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m6.972805706s)
helpers_test.go:175: Cleaning up "offline-docker-653786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-653786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-653786: (1.048721669s)
--- PASS: TestOffline (68.02s)
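TestOffline checks that a start succeeds when everything it needs is already on disk. One way to stage that manually, sketched with illustrative profile names:

  # Warm the cache while online
  minikube start --download-only -p offline-demo --driver=kvm2
  # A later start can then be served from the cache
  minikube start -p offline-demo --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2
  minikube delete -p offline-demo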

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391247
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-391247: exit status 85 (63.895689ms)

                                                
                                                
-- stdout --
	* Profile "addons-391247" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391247"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
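As with minikube logs earlier, exit status 85 is the "no such profile" code: the command refuses to act on a cluster that does not exist, and the test passes because of that refusal. The behavior is easy to reproduce (profile name illustrative):

  minikube addons enable dashboard -p no-such-profile
  echo $?   # 85, with a pointer to "minikube profile list"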

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391247
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-391247: exit status 85 (60.979151ms)

                                                
                                                
-- stdout --
	* Profile "addons-391247" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-391247"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (150.18s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-391247 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-391247 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.176283603s)
--- PASS: TestAddons/Setup (150.18s)
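The setup run enables thirteen addons in one start by repeating --addons; every parallel test below assumes this profile. A trimmed sketch of the same pattern (profile name and addon subset illustrative):

  minikube start -p addons-demo --wait=true --memory=4000 --driver=kvm2 \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
  minikube addons list -p addons-demo   # confirm what came up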

                                                
                                    
x
+
TestAddons/parallel/Registry (14.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.693772ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-b44n9" [34e644e6-728e-4cd4-96eb-f9f1ff8808df] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006592473s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qlhtm" [5d33b58b-3659-45ff-8a5a-cfc3764ff0e5] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00505158s
addons_test.go:340: (dbg) Run:  kubectl --context addons-391247 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-391247 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-391247 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.26201322s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 ip
2024/02/29 00:49:25 [DEBUG] GET http://192.168.39.57:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.99s)
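Both probes in this test are reusable as-is: a throwaway busybox pod checks the registry's in-cluster DNS name with wget --spider, and the host-side check hits the node IP on port 5000 (192.168.39.57 in this run). Lifted from the commands above:

  kubectl --context addons-391247 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # Host side: the registry is exposed on the node IP
  curl -s http://$(minikube -p addons-391247 ip):5000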

                                                
                                    
x
+
TestAddons/parallel/Ingress (28.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-391247 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context addons-391247 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (5.700483018s)
addons_test.go:232: (dbg) Run:  kubectl --context addons-391247 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-391247 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [da3d50ff-82b0-481b-ac9d-1911d9d4db31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [da3d50ff-82b0-481b-ac9d-1911d9d4db31] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003227424s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-391247 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.57
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-391247 addons disable ingress-dns --alsologtostderr -v=1: (2.012055711s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-391247 addons disable ingress --alsologtostderr -v=1: (7.894288279s)
--- PASS: TestAddons/parallel/Ingress (28.74s)
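Two distinct things get verified here: HTTP routing through the ingress controller, exercised from inside the VM with a spoofed Host header, and ingress-dns, exercised by resolving a test hostname against the node IP (192.168.39.57 in this run). Reconstructed from the commands above:

  # Routing: curl the ingress from inside the node, faking the Host header
  minikube -p addons-391247 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns: the node answers DNS queries for test hostnames
  nslookup hello-john.test 192.168.39.57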

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bmrv9" [d6a1b815-e108-40c4-9010-f182a442b697] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006196287s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391247
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-391247: (5.765504211s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.16s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 27.456201ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-5hxrv" [e35cbf06-c538-4e47-a35d-2e7a83f5e4cf] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004180128s
addons_test.go:415: (dbg) Run:  kubectl --context addons-391247 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-391247 addons disable metrics-server --alsologtostderr -v=1: (1.057144315s)
--- PASS: TestAddons/parallel/MetricsServer (6.16s)
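Once the metrics-server pod reports healthy, the functional check is a plain kubectl top, which errors out (rather than printing zeros) if the metrics API is not aggregating yet. As run in this test:

  kubectl --context addons-391247 top pods -n kube-system
  # Tear-down used by the test
  minikube -p addons-391247 addons disable metrics-server --alsologtostderr -v=1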

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.76s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.995709ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-49qf4" [47a96dff-7fe8-4623-9d44-d945b53b46b1] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005012909s
addons_test.go:473: (dbg) Run:  kubectl --context addons-391247 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-391247 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.97027827s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.76s)
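The Tiller check is a one-shot Helm v2 client pod in kube-system asking for version, which only succeeds if it can reach the tiller-deploy service. Verbatim from this run:

  kubectl --context addons-391247 run --rm helm-test --restart=Never \
    --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version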

                                                
                                    
x
+
TestAddons/parallel/CSI (97.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 26.796014ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391247 get pvc hpvc -o jsonpath={.status.phase} -n default
	[the poll above runs 54 times in total in this log before pvc "hpvc" reports Bound]
addons_test.go:574: (dbg) Run:  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ef680df8-66ad-4ee0-9831-11b0ea343a69] Pending
helpers_test.go:344: "task-pv-pod" [ef680df8-66ad-4ee0-9831-11b0ea343a69] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ef680df8-66ad-4ee0-9831-11b0ea343a69] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.005083801s
addons_test.go:584: (dbg) Run:  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391247 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-391247 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-391247 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-391247 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391247 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
	[the poll above runs 11 times in total before pvc "hpvc-restore" binds]
addons_test.go:616: (dbg) Run:  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1cd28e6f-1a60-4d23-b0a9-38bfff3fabe0] Pending
helpers_test.go:344: "task-pv-pod-restore" [1cd28e6f-1a60-4d23-b0a9-38bfff3fabe0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1cd28e6f-1a60-4d23-b0a9-38bfff3fabe0] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003825053s
addons_test.go:626: (dbg) Run:  kubectl --context addons-391247 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-391247 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-391247 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-391247 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.682327478s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (97.05s)
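The repeated helper lines above are the harness re-reading .status.phase until each claim binds. The snapshot-and-restore flow can be replayed with the same jsonpath probe wrapped in a small wait loop (the loop itself is mine, not the harness's):

  # Wait for a PVC to bind, polling the way helpers_test.go does
  until [ "$(kubectl --context addons-391247 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
    sleep 2
  done
  kubectl --context addons-391247 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-391247 get volumesnapshot new-snapshot-demo \
    -o jsonpath='{.status.readyToUse}' -n default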

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-391247 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-391247 --alsologtostderr -v=1: (1.863054723s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-9jnf2" [60a30b5a-bac0-4a15-a8ff-3ebeec4fc77f] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-9jnf2" [60a30b5a-bac0-4a15-a8ff-3ebeec4fc77f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-9jnf2" [60a30b5a-bac0-4a15-a8ff-3ebeec4fc77f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004176431s
--- PASS: TestAddons/parallel/Headlamp (14.87s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-p7r5l" [177353f4-df72-4e1c-a26b-2478f50d627b] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004598278s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-391247
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-391247 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-391247 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-391247 get pvc test-pvc -o jsonpath={.status.phase} -n default
	[the poll above runs 8 times in total before the wait on pvc "test-pvc" completes]
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fd7688ed-b2b2-4a9d-b76e-67628fffc68b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fd7688ed-b2b2-4a9d-b76e-67628fffc68b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fd7688ed-b2b2-4a9d-b76e-67628fffc68b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004650254s
addons_test.go:891: (dbg) Run:  kubectl --context addons-391247 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 ssh "cat /opt/local-path-provisioner/pvc-2c1cc55a-8265-449f-80b3-d4337ff57558_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-391247 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-391247 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-391247 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-391247 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.344585018s)
--- PASS: TestAddons/parallel/LocalPath (55.19s)

TestAddons/parallel/NvidiaDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6gxnc" [e9deeb4f-6c26-4613-b03a-eb295cb7e46e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005553189s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-391247
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.45s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-hp49s" [96c1165d-8667-4688-b486-58be551b83b8] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005343421s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-391247 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-391247 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-391247
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-391247: (13.107586449s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-391247
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-391247
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-391247
--- PASS: TestAddons/StoppedEnableDisable (13.41s)

TestCertOptions (82.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-275535 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-275535 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m20.864666516s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-275535 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-275535 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-275535 -- "sudo cat /etc/kubernetes/admin.conf"
E0229 01:37:28.298162  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.303458  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.313815  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.334175  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-275535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-275535
E0229 01:37:28.375346  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.455645  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.616704  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:28.937313  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
--- PASS: TestCertOptions (82.28s)

TestCertExpiration (288.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-725953 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0229 01:37:26.730942  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-725953 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m18.598442187s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-725953 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-725953 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (28.668762313s)
helpers_test.go:175: Cleaning up "cert-expiration-725953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-725953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-725953: (1.115864277s)
--- PASS: TestCertExpiration (288.38s)

TestDockerFlags (100.66s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-555686 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E0229 01:35:23.849315  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:36:04.809655  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-555686 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m39.001551098s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-555686 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-555686 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-555686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-555686
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-555686: (1.130961172s)
--- PASS: TestDockerFlags (100.66s)

TestForceSystemdFlag (84.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-548271 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E0229 01:34:42.886655  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:42.891917  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:42.902131  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:42.922472  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:42.962753  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:43.042876  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:43.203319  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:43.523966  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:44.164889  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:45.445460  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:48.006629  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:53.127740  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:34:57.863616  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:35:03.368866  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-548271 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m23.317729759s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-548271 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-548271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-548271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-548271: (1.111726447s)
--- PASS: TestForceSystemdFlag (84.72s)

TestForceSystemdEnv (99.69s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-783873 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-783873 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m38.431976385s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-783873 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-783873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-783873
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-783873: (1.038399823s)
--- PASS: TestForceSystemdEnv (99.69s)

TestKVMDriverInstallOrUpdate (3.84s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.84s)

TestErrorSpam/setup (49.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-789903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789903 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-789903 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-789903 --driver=kvm2 : (49.09460836s)
--- PASS: TestErrorSpam/setup (49.09s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (12.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 stop: (12.385234416s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-789903 --log_dir /tmp/nospam-789903 stop
--- PASS: TestErrorSpam/stop (12.55s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/test/nested/copy/122595/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-181199 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m3.857390789s)
--- PASS: TestFunctional/serial/StartWithProxy (63.86s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.73s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-181199 --alsologtostderr -v=8: (36.725895352s)
functional_test.go:659: soft start took 36.726938965s for "functional-181199" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.73s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-181199 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-181199 /tmp/TestFunctionalserialCacheCmdcacheadd_local2928085348/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache add minikube-local-cache-test:functional-181199
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 cache add minikube-local-cache-test:functional-181199: (1.004263488s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache delete minikube-local-cache-test:functional-181199
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-181199
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (254.387315ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 kubectl -- --context functional-181199 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-181199 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.66s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0229 00:54:10.695180  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:10.700936  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:10.711877  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:10.732170  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:10.772962  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:10.853431  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:11.013845  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:11.334396  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:11.974999  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:13.255492  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:15.816324  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:20.936594  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:54:31.177465  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-181199 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.657182767s)
functional_test.go:757: restart took 40.657320843s for "functional-181199" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.66s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-181199 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 logs
E0229 00:54:51.658025  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 logs: (1.065295938s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 logs --file /tmp/TestFunctionalserialLogsFileCmd2696158342/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 logs --file /tmp/TestFunctionalserialLogsFileCmd2696158342/001/logs.txt: (1.071207888s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.07s)

TestFunctional/serial/InvalidService (4.13s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-181199 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-181199
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-181199: exit status 115 (289.664407ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.142:30239 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-181199 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 config get cpus: exit status 14 (87.636352ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 config get cpus: exit status 14 (75.444257ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (19.79s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-181199 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-181199 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 129160: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.79s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-181199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (170.172405ms)

-- stdout --
	* [functional-181199] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0229 00:54:59.313695  128893 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:54:59.313903  128893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:54:59.313952  128893 out.go:304] Setting ErrFile to fd 2...
	I0229 00:54:59.313969  128893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:54:59.314234  128893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 00:54:59.314764  128893 out.go:298] Setting JSON to false
	I0229 00:54:59.315763  128893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2251,"bootTime":1709165849,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:54:59.315827  128893 start.go:139] virtualization: kvm guest
	I0229 00:54:59.317877  128893 out.go:177] * [functional-181199] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 00:54:59.319286  128893 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 00:54:59.320598  128893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:54:59.319319  128893 notify.go:220] Checking for updates...
	I0229 00:54:59.323372  128893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:54:59.325027  128893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:54:59.326459  128893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 00:54:59.328049  128893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 00:54:59.329857  128893 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 00:54:59.330268  128893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 00:54:59.330304  128893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 00:54:59.348614  128893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
	I0229 00:54:59.349070  128893 main.go:141] libmachine: () Calling .GetVersion
	I0229 00:54:59.349828  128893 main.go:141] libmachine: Using API Version  1
	I0229 00:54:59.349859  128893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 00:54:59.350321  128893 main.go:141] libmachine: () Calling .GetMachineName
	I0229 00:54:59.350638  128893 main.go:141] libmachine: (functional-181199) Calling .DriverName
	I0229 00:54:59.350978  128893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:54:59.351429  128893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 00:54:59.351464  128893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 00:54:59.366254  128893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I0229 00:54:59.366669  128893 main.go:141] libmachine: () Calling .GetVersion
	I0229 00:54:59.367058  128893 main.go:141] libmachine: Using API Version  1
	I0229 00:54:59.367085  128893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 00:54:59.367360  128893 main.go:141] libmachine: () Calling .GetMachineName
	I0229 00:54:59.367549  128893 main.go:141] libmachine: (functional-181199) Calling .DriverName
	I0229 00:54:59.400944  128893 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 00:54:59.402902  128893 start.go:299] selected driver: kvm2
	I0229 00:54:59.402922  128893 start.go:903] validating driver "kvm2" against &{Name:functional-181199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-181199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:54:59.403044  128893 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 00:54:59.405638  128893 out.go:177] 
	W0229 00:54:59.407029  128893 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 00:54:59.408444  128893 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.35s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-181199 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (168.781964ms)

-- stdout --
	* [functional-181199] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0229 00:54:59.128876  128842 out.go:291] Setting OutFile to fd 1 ...
	I0229 00:54:59.129157  128842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:54:59.129169  128842 out.go:304] Setting ErrFile to fd 2...
	I0229 00:54:59.129176  128842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 00:54:59.129450  128842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 00:54:59.130012  128842 out.go:298] Setting JSON to false
	I0229 00:54:59.131001  128842 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2250,"bootTime":1709165849,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 00:54:59.131085  128842 start.go:139] virtualization: kvm guest
	I0229 00:54:59.133457  128842 out.go:177] * [functional-181199] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 00:54:59.134847  128842 out.go:177]   - MINIKUBE_LOCATION=18063
	I0229 00:54:59.134906  128842 notify.go:220] Checking for updates...
	I0229 00:54:59.136212  128842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 00:54:59.137590  128842 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	I0229 00:54:59.138876  128842 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	I0229 00:54:59.140410  128842 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 00:54:59.141883  128842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 00:54:59.143797  128842 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 00:54:59.144252  128842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 00:54:59.144316  128842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 00:54:59.160194  128842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0229 00:54:59.160670  128842 main.go:141] libmachine: () Calling .GetVersion
	I0229 00:54:59.161260  128842 main.go:141] libmachine: Using API Version  1
	I0229 00:54:59.161296  128842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 00:54:59.161860  128842 main.go:141] libmachine: () Calling .GetMachineName
	I0229 00:54:59.162108  128842 main.go:141] libmachine: (functional-181199) Calling .DriverName
	I0229 00:54:59.162451  128842 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 00:54:59.162755  128842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 00:54:59.162804  128842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 00:54:59.178703  128842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44859
	I0229 00:54:59.179227  128842 main.go:141] libmachine: () Calling .GetVersion
	I0229 00:54:59.179765  128842 main.go:141] libmachine: Using API Version  1
	I0229 00:54:59.179786  128842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 00:54:59.180148  128842 main.go:141] libmachine: () Calling .GetMachineName
	I0229 00:54:59.180322  128842 main.go:141] libmachine: (functional-181199) Calling .DriverName
	I0229 00:54:59.229576  128842 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 00:54:59.230972  128842 start.go:299] selected driver: kvm2
	I0229 00:54:59.230990  128842 start.go:903] validating driver "kvm2" against &{Name:functional-181199 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-181199 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.142 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 00:54:59.231143  128842 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 00:54:59.233555  128842 out.go:177] 
	W0229 00:54:59.234693  128842 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 00:54:59.235794  128842 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
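
The French stderr above is the point of this test: under a French locale, minikube localizes its output, so "* Utilisation du pilote kvm2 basé sur le profil existant" is the localized "Using the kvm2 driver based on existing profile", and the fatal message reads in English roughly "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal Go sketch of the same pattern, assuming the binary honors LC_ALL/LANG and that 250MB is below the enforced minimum (not the test's actual code; the flags shown mirror the command in this run):

// Run minikube with a French locale and look for the localized error.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-181199", "--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected here
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("got the expected resource error; output is localized:")
		fmt.Println(string(out))
	}
}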

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
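
The second invocation above passes a custom Go template via -f. A self-contained sketch of how such a format string renders (the Status struct is illustrative, not minikube's actual type; the "kublet" key deliberately mirrors the literal format string used by the test):

// Render a minikube-style status format string with text/template.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}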

TestFunctional/parallel/ServiceCmdConnect (8.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-181199 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-181199 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fqjtc" [73a4f2de-4168-46a3-a255-194f57ffc14e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fqjtc" [73a4f2de-4168-46a3-a255-194f57ffc14e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.081085597s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.142:31860
functional_test.go:1671: http://192.168.39.142:31860: success! body:
Hostname: hello-node-connect-55497b8b78-fqjtc
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.142:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.142:31860
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.81s)
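
The test resolves the NodePort URL with `minikube service ... --url` and then fetches it; the echoserver reply above is the fetched body. A hedged sketch of polling such an endpoint until it answers (URL from this run; the timeout and interval are illustrative):

// Poll a NodePort service URL until it returns 200 OK or a deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.142:31860" // as printed by `service ... --url` above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("service reachable")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for service")
}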

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (40.25s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f16dae8f-bbe6-42b3-b83a-66e5ed25d11b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00883198s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-181199 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-181199 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-181199 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-181199 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-181199 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ae582b1-8d22-4903-bae7-b217596c7f87] Pending
helpers_test.go:344: "sp-pod" [8ae582b1-8d22-4903-bae7-b217596c7f87] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ae582b1-8d22-4903-bae7-b217596c7f87] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005875377s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-181199 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-181199 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-181199 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14ad3c0c-6e51-458b-a83c-bc2764f8ca5e] Pending
helpers_test.go:344: "sp-pod" [14ad3c0c-6e51-458b-a83c-bc2764f8ca5e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14ad3c0c-6e51-458b-a83c-bc2764f8ca5e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003736647s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-181199 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.25s)
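
The sequence above is a persistence round-trip: write a marker file through the first sp-pod, delete the pod, schedule a replacement against the same PVC, and confirm the file is still there. A sketch of that flow with plain kubectl (the real test uses its own helpers and waits for readiness between steps; error handling is elided):

// PVC persistence check: data written via one pod survives pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	out, _ := exec.Command("kubectl", append([]string{"--context", "functional-181199"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait for the new sp-pod to be Running before this final check...
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo"
}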

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh -n functional-181199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cp functional-181199:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2305171742/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh -n functional-181199 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh -n functional-181199 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)
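
Each cp above is verified by reading the file back over ssh and comparing it with the local source. A minimal sketch of that round-trip (paths match this run; error handling is elided for brevity):

// Copy a file into the node with `minikube cp`, then diff it over ssh.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	src, _ := os.ReadFile("testdata/cp-test.txt")
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-181199",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	back, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-181199",
		"ssh", "-n", "functional-181199", "sudo cat /home/docker/cp-test.txt").Output()
	fmt.Println("round-trip matches:", bytes.Equal(src, back))
}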

TestFunctional/parallel/MySQL (35.16s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-181199 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-q8qbm" [ad71fa7c-995c-4aea-894b-56d1adcd8215] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-q8qbm" [ad71fa7c-995c-4aea-894b-56d1adcd8215] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.005093907s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;": exit status 1 (168.705445ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;": exit status 1 (144.415671ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;": exit status 1 (132.174029ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-181199 exec mysql-859648c796-q8qbm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.16s)
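
The three non-zero exits above are expected noise: mysqld inside the pod rejects connections (first ERROR 1045, then ERROR 2002) while it is still initializing, so the query is simply retried until it succeeds. A sketch of that retry pattern (the deadline and backoff values here are assumptions, not the test's constants):

// Retry a query against a just-started MySQL pod until mysqld is ready.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for backoff := time.Second; time.Now().Before(deadline); backoff *= 2 {
		out, err := exec.Command("kubectl", "--context", "functional-181199",
			"exec", "mysql-859648c796-q8qbm", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("query succeeded:\n%s", out)
			return
		}
		time.Sleep(backoff) // 1s, 2s, 4s, ... until the deadline
	}
	fmt.Println("mysql never became ready")
}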

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/122595/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /etc/test/nested/copy/122595/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/122595.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /etc/ssl/certs/122595.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/122595.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /usr/share/ca-certificates/122595.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1225952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /etc/ssl/certs/1225952.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1225952.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /usr/share/ca-certificates/1225952.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)
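
CertSync expects the same PEM to appear in three places inside the VM: /etc/ssl/certs/<pid>.pem, /usr/share/ca-certificates/<pid>.pem, and a hash-named file such as 51391683.0 (OpenSSL's subject-hash naming used for CA lookup). A hedged sketch of that comparison over `minikube ssh` (the helper is illustrative; paths follow this run; errors elided):

// Verify a synced certificate is byte-identical at all three locations.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func catInVM(path string) []byte {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-181199",
		"ssh", "sudo cat "+path).Output()
	return out
}

func main() {
	a := catInVM("/etc/ssl/certs/122595.pem")
	b := catInVM("/usr/share/ca-certificates/122595.pem")
	c := catInVM("/etc/ssl/certs/51391683.0")
	fmt.Println("all three copies identical:", bytes.Equal(a, b) && bytes.Equal(b, c))
}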

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-181199 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh "sudo systemctl is-active crio": exit status 1 (249.105193ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
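
The "failure" above is the expected result: `systemctl is-active` exits 0 only when the unit is active, and here it prints "inactive" and exits with status 3, which `minikube ssh` surfaces as a failing exit; since docker is the active runtime, an inactive crio is exactly what the test wants. A sketch of interpreting that exit code (assumed handling, not the test's code):

// Treat a failing `systemctl is-active crio` with output "inactive" as a pass.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() captures stdout only; err is non-nil when the remote command fails.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-181199",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	switch {
	case err != nil && state == "inactive":
		fmt.Println("crio correctly disabled")
	case err == nil:
		fmt.Println("unexpected: crio is", state)
	default:
		fmt.Println("check failed:", err, state)
	}
}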

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-181199 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-181199 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-t6g6m" [a6ed7b45-3cfd-4db6-957d-a1d1a49caa5b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-t6g6m" [a6ed7b45-3cfd-4db6-957d-a1d1a49caa5b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.007081061s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/MountCmd/any-port (10.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdany-port2026224687/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709168098073991578" to /tmp/TestFunctionalparallelMountCmdany-port2026224687/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709168098073991578" to /tmp/TestFunctionalparallelMountCmdany-port2026224687/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709168098073991578" to /tmp/TestFunctionalparallelMountCmdany-port2026224687/001/test-1709168098073991578
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.606212ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 00:54 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 00:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 00:54 test-1709168098073991578
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh cat /mount-9p/test-1709168098073991578
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-181199 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [642dfaea-6170-439e-bb18-f5072cc7d309] Pending
helpers_test.go:344: "busybox-mount" [642dfaea-6170-439e-bb18-f5072cc7d309] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [642dfaea-6170-439e-bb18-f5072cc7d309] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [642dfaea-6170-439e-bb18-f5072cc7d309] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004800719s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-181199 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdany-port2026224687/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.90s)
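
Note the first findmnt probe fails: the mount command runs as a background daemon, so the 9p filesystem may not be visible in the guest immediately and the check is simply re-run. A sketch of that readiness poll (the retry count and interval are assumptions):

// Wait for the 9p mount to become visible inside the guest.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-181199",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}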

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "215.459788ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "60.094976ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "214.989282ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.664134ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181199 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-181199
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-181199
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181199 image ls --format short --alsologtostderr:
I0229 00:55:28.298417  130852 out.go:291] Setting OutFile to fd 1 ...
I0229 00:55:28.298557  130852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.298567  130852 out.go:304] Setting ErrFile to fd 2...
I0229 00:55:28.298572  130852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.298774  130852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:55:28.299372  130852 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.299492  130852 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.299871  130852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.299924  130852 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.315292  130852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
I0229 00:55:28.315807  130852 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.316359  130852 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.316383  130852 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.316800  130852 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.317017  130852 main.go:141] libmachine: (functional-181199) Calling .GetState
I0229 00:55:28.318950  130852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.318986  130852 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.333983  130852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
I0229 00:55:28.334386  130852 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.334953  130852 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.334991  130852 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.335334  130852 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.335536  130852 main.go:141] libmachine: (functional-181199) Calling .DriverName
I0229 00:55:28.335744  130852 ssh_runner.go:195] Run: systemctl --version
I0229 00:55:28.335769  130852 main.go:141] libmachine: (functional-181199) Calling .GetSSHHostname
I0229 00:55:28.338815  130852 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.339276  130852 main.go:141] libmachine: (functional-181199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f3:fb", ip: ""} in network mk-functional-181199: {Iface:virbr1 ExpiryTime:2024-02-29 01:52:38 +0000 UTC Type:0 Mac:52:54:00:c4:f3:fb Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-181199 Clientid:01:52:54:00:c4:f3:fb}
I0229 00:55:28.339366  130852 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined IP address 192.168.39.142 and MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.339471  130852 main.go:141] libmachine: (functional-181199) Calling .GetSSHPort
I0229 00:55:28.339640  130852 main.go:141] libmachine: (functional-181199) Calling .GetSSHKeyPath
I0229 00:55:28.339830  130852 main.go:141] libmachine: (functional-181199) Calling .GetSSHUsername
I0229 00:55:28.339996  130852 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/functional-181199/id_rsa Username:docker}
I0229 00:55:28.474655  130852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 00:55:28.516246  130852 main.go:141] libmachine: Making call to close driver server
I0229 00:55:28.516259  130852 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:28.516541  130852 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:28.516562  130852 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:28.516571  130852 main.go:141] libmachine: Making call to close driver server
I0229 00:55:28.516580  130852 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:28.516805  130852 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:28.516849  130852 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:28.516851  130852 main.go:141] libmachine: (functional-181199) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
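
The stderr shows where the list comes from: `image ls` connects to the node over ssh and runs `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A sketch of decoding that stream (the struct covers only a few of docker's template keys and is illustrative):

// Decode line-delimited JSON from `docker images --format "{{json .}}"`.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerImage struct {
	ID         string `json:"ID"`
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	Size       string `json:"Size"`
}

func main() {
	out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		var img dockerImage
		if json.Unmarshal(sc.Bytes(), &img) == nil {
			fmt.Printf("%s:%s (%s)\n", img.Repository, img.Tag, img.Size)
		}
	}
}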

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181199 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/minikube-local-cache-test | functional-181199 | b927f45c58c6d | 30B    |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/google-containers/addon-resizer      | functional-181199 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181199 image ls --format table --alsologtostderr:
I0229 00:55:29.125720  130976 out.go:291] Setting OutFile to fd 1 ...
I0229 00:55:29.126059  130976 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:29.126074  130976 out.go:304] Setting ErrFile to fd 2...
I0229 00:55:29.126080  130976 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:29.126371  130976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:55:29.127188  130976 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:29.127345  130976 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:29.127897  130976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:29.127965  130976 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:29.144272  130976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
I0229 00:55:29.144756  130976 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:29.145392  130976 main.go:141] libmachine: Using API Version  1
I0229 00:55:29.145417  130976 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:29.145806  130976 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:29.146043  130976 main.go:141] libmachine: (functional-181199) Calling .GetState
I0229 00:55:29.148181  130976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:29.148233  130976 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:29.163045  130976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
I0229 00:55:29.163507  130976 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:29.163985  130976 main.go:141] libmachine: Using API Version  1
I0229 00:55:29.164011  130976 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:29.164358  130976 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:29.164568  130976 main.go:141] libmachine: (functional-181199) Calling .DriverName
I0229 00:55:29.164817  130976 ssh_runner.go:195] Run: systemctl --version
I0229 00:55:29.164857  130976 main.go:141] libmachine: (functional-181199) Calling .GetSSHHostname
I0229 00:55:29.167809  130976 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:29.168231  130976 main.go:141] libmachine: (functional-181199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f3:fb", ip: ""} in network mk-functional-181199: {Iface:virbr1 ExpiryTime:2024-02-29 01:52:38 +0000 UTC Type:0 Mac:52:54:00:c4:f3:fb Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-181199 Clientid:01:52:54:00:c4:f3:fb}
I0229 00:55:29.168260  130976 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined IP address 192.168.39.142 and MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:29.168360  130976 main.go:141] libmachine: (functional-181199) Calling .GetSSHPort
I0229 00:55:29.168551  130976 main.go:141] libmachine: (functional-181199) Calling .GetSSHKeyPath
I0229 00:55:29.168736  130976 main.go:141] libmachine: (functional-181199) Calling .GetSSHUsername
I0229 00:55:29.168904  130976 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/functional-181199/id_rsa Username:docker}
I0229 00:55:29.314388  130976 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 00:55:29.340900  130976 main.go:141] libmachine: Making call to close driver server
I0229 00:55:29.340916  130976 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:29.341200  130976 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:29.341219  130976 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:29.341228  130976 main.go:141] libmachine: Making call to close driver server
I0229 00:55:29.341229  130976 main.go:141] libmachine: (functional-181199) DBG | Closing plugin on server side
I0229 00:55:29.341236  130976 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:29.341478  130976 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:29.341500  130976 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181199 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b927f45c58c6dc4294d1115b6fe5417d8c326638cadc51aa69b862abb72615db","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-181199"],"size":"30"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09
683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-181199"],"size":"32900000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":
[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4
"],"size":"60100000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181199 image ls --format json --alsologtostderr:
I0229 00:55:28.865612  130927 out.go:291] Setting OutFile to fd 1 ...
I0229 00:55:28.865741  130927 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.865750  130927 out.go:304] Setting ErrFile to fd 2...
I0229 00:55:28.865754  130927 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.866017  130927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:55:28.867417  130927 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.867728  130927 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.868331  130927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.868375  130927 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.883104  130927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
I0229 00:55:28.883506  130927 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.884087  130927 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.884110  130927 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.884454  130927 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.884659  130927 main.go:141] libmachine: (functional-181199) Calling .GetState
I0229 00:55:28.886301  130927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.886347  130927 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.900912  130927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38393
I0229 00:55:28.901417  130927 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.901868  130927 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.901888  130927 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.902392  130927 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.902613  130927 main.go:141] libmachine: (functional-181199) Calling .DriverName
I0229 00:55:28.902870  130927 ssh_runner.go:195] Run: systemctl --version
I0229 00:55:28.902903  130927 main.go:141] libmachine: (functional-181199) Calling .GetSSHHostname
I0229 00:55:28.905654  130927 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.906096  130927 main.go:141] libmachine: (functional-181199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f3:fb", ip: ""} in network mk-functional-181199: {Iface:virbr1 ExpiryTime:2024-02-29 01:52:38 +0000 UTC Type:0 Mac:52:54:00:c4:f3:fb Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-181199 Clientid:01:52:54:00:c4:f3:fb}
I0229 00:55:28.906126  130927 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined IP address 192.168.39.142 and MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.906264  130927 main.go:141] libmachine: (functional-181199) Calling .GetSSHPort
I0229 00:55:28.906455  130927 main.go:141] libmachine: (functional-181199) Calling .GetSSHKeyPath
I0229 00:55:28.906591  130927 main.go:141] libmachine: (functional-181199) Calling .GetSSHUsername
I0229 00:55:28.906717  130927 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/functional-181199/id_rsa Username:docker}
I0229 00:55:28.995226  130927 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 00:55:29.055923  130927 main.go:141] libmachine: Making call to close driver server
I0229 00:55:29.055934  130927 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:29.056238  130927 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:29.056258  130927 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:29.056267  130927 main.go:141] libmachine: Making call to close driver server
I0229 00:55:29.056276  130927 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:29.056567  130927 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:29.056598  130927 main.go:141] libmachine: (functional-181199) DBG | Closing plugin on server side
I0229 00:55:29.056617  130927 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181199 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b927f45c58c6dc4294d1115b6fe5417d8c326638cadc51aa69b862abb72615db
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-181199
size: "30"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-181199
size: "32900000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181199 image ls --format yaml --alsologtostderr:
I0229 00:55:28.590537  130875 out.go:291] Setting OutFile to fd 1 ...
I0229 00:55:28.590683  130875 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.590698  130875 out.go:304] Setting ErrFile to fd 2...
I0229 00:55:28.590705  130875 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.591011  130875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:55:28.591861  130875 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.591991  130875 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.592385  130875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.592436  130875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.607467  130875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
I0229 00:55:28.607993  130875 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.608674  130875 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.608705  130875 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.609150  130875 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.609434  130875 main.go:141] libmachine: (functional-181199) Calling .GetState
I0229 00:55:28.611553  130875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.611601  130875 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.627155  130875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
I0229 00:55:28.627615  130875 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.628151  130875 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.628189  130875 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.628617  130875 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.628844  130875 main.go:141] libmachine: (functional-181199) Calling .DriverName
I0229 00:55:28.629073  130875 ssh_runner.go:195] Run: systemctl --version
I0229 00:55:28.629101  130875 main.go:141] libmachine: (functional-181199) Calling .GetSSHHostname
I0229 00:55:28.631984  130875 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.632420  130875 main.go:141] libmachine: (functional-181199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f3:fb", ip: ""} in network mk-functional-181199: {Iface:virbr1 ExpiryTime:2024-02-29 01:52:38 +0000 UTC Type:0 Mac:52:54:00:c4:f3:fb Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-181199 Clientid:01:52:54:00:c4:f3:fb}
I0229 00:55:28.632444  130875 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined IP address 192.168.39.142 and MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:28.632622  130875 main.go:141] libmachine: (functional-181199) Calling .GetSSHPort
I0229 00:55:28.632822  130875 main.go:141] libmachine: (functional-181199) Calling .GetSSHKeyPath
I0229 00:55:28.632979  130875 main.go:141] libmachine: (functional-181199) Calling .GetSSHUsername
I0229 00:55:28.633135  130875 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/functional-181199/id_rsa Username:docker}
I0229 00:55:28.761125  130875 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 00:55:28.798794  130875 main.go:141] libmachine: Making call to close driver server
I0229 00:55:28.798810  130875 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:28.799087  130875 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:28.799111  130875 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:28.799126  130875 main.go:141] libmachine: Making call to close driver server
I0229 00:55:28.799139  130875 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:28.799151  130875 main.go:141] libmachine: (functional-181199) DBG | Closing plugin on server side
I0229 00:55:28.799360  130875 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:28.799384  130875 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh pgrep buildkitd: exit status 1 (217.045512ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image build -t localhost/my-image:functional-181199 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image build -t localhost/my-image:functional-181199 testdata/build --alsologtostderr: (4.019448765s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181199 image build -t localhost/my-image:functional-181199 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Waiting
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in a96da6d1e3a1
Removing intermediate container a96da6d1e3a1
---> c3d5440c327f
Step 3/3 : ADD content.txt /
---> 846c074e5e8b
Successfully built 846c074e5e8b
Successfully tagged localhost/my-image:functional-181199
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181199 image build -t localhost/my-image:functional-181199 testdata/build --alsologtostderr:
I0229 00:55:28.960911  130952 out.go:291] Setting OutFile to fd 1 ...
I0229 00:55:28.961185  130952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.961194  130952 out.go:304] Setting ErrFile to fd 2...
I0229 00:55:28.961198  130952 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:55:28.961356  130952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:55:28.961942  130952 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.962459  130952 config.go:182] Loaded profile config "functional-181199": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 00:55:28.962887  130952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.962928  130952 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.977558  130952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
I0229 00:55:28.978067  130952 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.978678  130952 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.978712  130952 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.979053  130952 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.979256  130952 main.go:141] libmachine: (functional-181199) Calling .GetState
I0229 00:55:28.981251  130952 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:55:28.981297  130952 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:55:28.996881  130952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40695
I0229 00:55:28.997360  130952 main.go:141] libmachine: () Calling .GetVersion
I0229 00:55:28.997874  130952 main.go:141] libmachine: Using API Version  1
I0229 00:55:28.997899  130952 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:55:28.998263  130952 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:55:28.998474  130952 main.go:141] libmachine: (functional-181199) Calling .DriverName
I0229 00:55:28.998685  130952 ssh_runner.go:195] Run: systemctl --version
I0229 00:55:28.998710  130952 main.go:141] libmachine: (functional-181199) Calling .GetSSHHostname
I0229 00:55:29.001836  130952 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:29.002331  130952 main.go:141] libmachine: (functional-181199) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f3:fb", ip: ""} in network mk-functional-181199: {Iface:virbr1 ExpiryTime:2024-02-29 01:52:38 +0000 UTC Type:0 Mac:52:54:00:c4:f3:fb Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:functional-181199 Clientid:01:52:54:00:c4:f3:fb}
I0229 00:55:29.002365  130952 main.go:141] libmachine: (functional-181199) DBG | domain functional-181199 has defined IP address 192.168.39.142 and MAC address 52:54:00:c4:f3:fb in network mk-functional-181199
I0229 00:55:29.002514  130952 main.go:141] libmachine: (functional-181199) Calling .GetSSHPort
I0229 00:55:29.002693  130952 main.go:141] libmachine: (functional-181199) Calling .GetSSHKeyPath
I0229 00:55:29.002873  130952 main.go:141] libmachine: (functional-181199) Calling .GetSSHUsername
I0229 00:55:29.003055  130952 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/functional-181199/id_rsa Username:docker}
I0229 00:55:29.126107  130952 build_images.go:151] Building image from path: /tmp/build.1021279076.tar
I0229 00:55:29.126179  130952 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 00:55:29.142134  130952 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1021279076.tar
I0229 00:55:29.148018  130952 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1021279076.tar: stat -c "%s %y" /var/lib/minikube/build/build.1021279076.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1021279076.tar': No such file or directory
I0229 00:55:29.148051  130952 ssh_runner.go:362] scp /tmp/build.1021279076.tar --> /var/lib/minikube/build/build.1021279076.tar (3072 bytes)
I0229 00:55:29.236802  130952 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1021279076
I0229 00:55:29.262490  130952 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1021279076 -xf /var/lib/minikube/build/build.1021279076.tar
I0229 00:55:29.282809  130952 docker.go:360] Building image: /var/lib/minikube/build/build.1021279076
I0229 00:55:29.282907  130952 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-181199 /var/lib/minikube/build/build.1021279076
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0229 00:55:32.890279  130952 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-181199 /var/lib/minikube/build/build.1021279076: (3.607342426s)
I0229 00:55:32.890348  130952 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1021279076
I0229 00:55:32.903590  130952 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1021279076.tar
I0229 00:55:32.917747  130952 build_images.go:207] Built localhost/my-image:functional-181199 from /tmp/build.1021279076.tar
I0229 00:55:32.917797  130952 build_images.go:123] succeeded building to: functional-181199
I0229 00:55:32.917824  130952 build_images.go:124] failed building to: 
I0229 00:55:32.917861  130952 main.go:141] libmachine: Making call to close driver server
I0229 00:55:32.917877  130952 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:32.918167  130952 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:32.918184  130952 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 00:55:32.918201  130952 main.go:141] libmachine: (functional-181199) DBG | Closing plugin on server side
I0229 00:55:32.918212  130952 main.go:141] libmachine: Making call to close driver server
I0229 00:55:32.918223  130952 main.go:141] libmachine: (functional-181199) Calling .Close
I0229 00:55:32.918497  130952 main.go:141] libmachine: Successfully made call to close driver server
I0229 00:55:32.918515  130952 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.47s)
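
The three "Step 1/3".."Step 3/3" lines in the build output above fix the shape of the Dockerfile under testdata/build. A minimal sketch that reproduces the same build by hand, assuming the fixture is just those three instructions plus a small content.txt in the build context (a reconstruction from the log, not the checked-in file):

	# Recreate the inferred Dockerfile and run the same image build by hand
	# (hypothetical reconstruction; /tmp/build-demo and its content.txt are
	# stand-ins for the actual testdata/build fixture):
	mkdir -p /tmp/build-demo
	printf 'hello\n' > /tmp/build-demo/content.txt
	cat > /tmp/build-demo/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-181199 image build -t localhost/my-image:functional-181199 /tmp/build-demo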

TestFunctional/parallel/ImageCommands/Setup (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.293732049s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-181199
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.31s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr: (4.251945981s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr: (2.47375s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.177546306s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-181199
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image load --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr: (4.618940389s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.09s)

TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdspecific-port3232527178/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.992016ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdspecific-port3232527178/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh "sudo umount -f /mount-9p": exit status 1 (263.343369ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-181199 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdspecific-port3232527178/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T" /mount1: exit status 1 (345.189451ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-181199 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181199 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3393461043/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service list -o json
functional_test.go:1490: Took "466.614859ms" to run "out/minikube-linux-amd64 -p functional-181199 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.142:32443
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.142:32443
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 129996: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-181199 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [379ed78c-df0e-4369-a0ab-74ea2a8a2665] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [379ed78c-df0e-4369-a0ab-74ea2a8a2665] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.190411182s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.39s)
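
For context, the testsvc.yaml applied above pairs a pod labelled run=nginx-svc with a Service of type LoadBalancer; the tunnel assigns that Service the ingress IP checked in the later subtests. A minimal equivalent manifest, assuming that layout (hypothetical sketch, not the verbatim fixture):

	# Hypothetical equivalent of testdata/testsvc.yaml: a pod carrying the
	# run=nginx-svc label plus a LoadBalancer Service selecting it.
	kubectl --context functional-181199 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: nginx-svc
	  labels:
	    run: nginx-svc
	spec:
	  containers:
	  - name: nginx
	    image: nginx
	---
	apiVersion: v1
	kind: Service
	metadata:
	  name: nginx-svc
	spec:
	  type: LoadBalancer
	  selector:
	    run: nginx-svc
	  ports:
	  - port: 80
	EOF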

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image save gcr.io/google-containers/addon-resizer:functional-181199 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image save gcr.io/google-containers/addon-resizer:functional-181199 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.005492364s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image rm gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.471165051s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-181199
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 image save --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr
2024/02/29 00:55:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-181199 image save --daemon gcr.io/google-containers/addon-resizer:functional-181199 --alsologtostderr: (1.448042753s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-181199
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)

TestFunctional/parallel/DockerEnv/bash (0.9s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-181199 docker-env) && out/minikube-linux-amd64 status -p functional-181199"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-181199 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.90s)
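
The eval $(... docker-env) idiom above works because docker-env prints shell exports that point the host docker CLI at the daemon inside the VM; the test evaluates them and then runs `docker images` against that daemon. The output looks roughly like the following for this profile (illustrative values; the IP matches the VM address seen earlier in the log):

	# Approximate output of `out/minikube-linux-amd64 -p functional-181199 docker-env`
	# (illustrative; exact cert path and variables may differ by minikube version):
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.39.142:2376"
	export DOCKER_CERT_PATH="/home/jenkins/minikube-integration/18063-115328/.minikube/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-181199"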

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 update-context --alsologtostderr -v=2
E0229 00:55:32.618997  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181199 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-181199 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.201.37 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-181199 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-181199
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-181199
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-181199
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (319.85s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-335344 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-335344 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m1.584173179s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-335344 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-335344 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.109780059s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-335344 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-335344 addons enable gvisor: (3.34672464s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [39c82a2a-2dbd-40cd-98e0-fb0c81694c8e] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.007020436s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-335344 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5cfaff5c-fcf5-4f42-85ec-06b286dd2874] Pending
helpers_test.go:344: "nginx-gvisor" [5cfaff5c-fcf5-4f42-85ec-06b286dd2874] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [5cfaff5c-fcf5-4f42-85ec-06b286dd2874] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 16.004761834s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-335344
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-335344: (1m32.276675683s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-335344 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-335344 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (44.223812146s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [39c82a2a-2dbd-40cd-98e0-fb0c81694c8e] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00486844s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5cfaff5c-fcf5-4f42-85ec-06b286dd2874] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.003397099s
helpers_test.go:175: Cleaning up "gvisor-335344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-335344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-335344: (1.059069923s)
--- PASS: TestGvisorAddon (319.85s)
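
The nginx-gvisor.yaml fixture replaced above has to opt the pod into the gVisor runtime; the usual mechanism is the pod's runtimeClassName. A sketch of such a pod, matching the run=nginx,runtime=gvisor labels the test polls for (hypothetical; the checked-in fixture may differ):

	# Hypothetical equivalent of testdata/nginx-gvisor.yaml: the pod selects
	# the gVisor runtime via runtimeClassName and carries the labels the test
	# waits on in the "run=nginx,runtime=gvisor" checks above.
	kubectl --context gvisor-335344 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: nginx-gvisor
	  labels:
	    run: nginx
	    runtime: gvisor
	spec:
	  runtimeClassName: gvisor
	  containers:
	  - name: nginx
	    image: nginx
	EOF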

TestImageBuild/serial/Setup (45.53s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-236730 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-236730 --driver=kvm2 : (45.529066016s)
--- PASS: TestImageBuild/serial/Setup (45.53s)

TestImageBuild/serial/NormalBuild (1.51s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-236730
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-236730: (1.513403287s)
--- PASS: TestImageBuild/serial/NormalBuild (1.51s)

TestImageBuild/serial/BuildWithBuildArg (1.01s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-236730
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-236730: (1.009380425s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.01s)

TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-236730
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-236730
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestJSONOutput/start/Command (65.45s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-853116 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-853116 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m5.453913744s)
--- PASS: TestJSONOutput/start/Command (65.45s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-853116 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-853116 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-853116 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-853116 --output=json --user=testUser: (8.110691607s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-742159 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-742159 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.774839ms)
-- stdout --
	{"specversion":"1.0","id":"e9f9d626-ffac-4f46-bf71-681114a23ec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-742159] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4080bd6d-3a8f-43d8-a2d4-8b4d8a2440aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18063"}}
	{"specversion":"1.0","id":"91c357be-0de3-41ec-ae70-25a26bb2dc93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d102ee63-9a2e-441c-b06a-ad85f3c79fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig"}}
	{"specversion":"1.0","id":"4fe56e6d-5cb5-4d4f-9d71-2110c5da5bf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube"}}
	{"specversion":"1.0","id":"ec9bfac4-7873-4d26-a507-31f4014558df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ff2404e9-d747-45de-9990-caf4dd9859a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9cce973a-757e-4bcf-9cd9-22d2bd41231d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-742159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-742159
--- PASS: TestErrorJSONOutput (0.21s)
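Each line that minikube prints under --output=json is a CloudEvents-style JSON object, as the stdout block above shows: a fixed envelope (specversion, id, source, type, datacontenttype) plus a type-specific data map whose step events carry currentstep/totalsteps and whose error events carry exitcode and a name such as DRV_UNSUPPORTED_OS. A minimal Go sketch for consuming one such line follows; the struct is inferred from the fields visible in this log, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the JSON keys visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Illustrative input modeled on the error event above; the id is a placeholder.
	line := `{"specversion":"1.0","id":"example","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}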
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (103.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-085802 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-085802 --driver=kvm2 : (51.203767806s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-088890 --driver=kvm2 
E0229 01:09:10.694796  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-088890 --driver=kvm2 : (49.638052644s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-085802
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-088890
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-088890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-088890
helpers_test.go:175: Cleaning up "first-085802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-085802
--- PASS: TestMinikubeProfile (103.43s)
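The profile checks above rely on `profile list -ojson`, which dumps the profile inventory as JSON. A schema-agnostic way to inspect that output from Go, decoding into a generic map because this log does not show the document's exact shape:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]interface{}
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key := range doc {
		fmt.Println("top-level key:", key)
	}
}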
TestMountStart/serial/StartWithMountFirst (28.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-444288 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0229 01:09:57.865747  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-444288 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.671718091s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.67s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-444288 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-444288 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (31.97s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-461413 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0229 01:10:33.742538  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-461413 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.97317336s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.97s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-444288 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (2.1s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-461413
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-461413: (2.095053151s)
--- PASS: TestMountStart/serial/Stop (2.10s)

TestMountStart/serial/RestartStopped (24.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-461413
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-461413: (23.099817975s)
--- PASS: TestMountStart/serial/RestartStopped (24.10s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-461413 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (116.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074064 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074064 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (1m56.524186458s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.94s)

TestMultiNode/serial/DeployApp2Nodes (4.37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-074064 -- rollout status deployment/busybox: (2.670794457s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-7s8jp -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-c2wf4 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-7s8jp -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-c2wf4 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-7s8jp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-c2wf4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.37s)

TestMultiNode/serial/PingHostFrom2Pods (0.89s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-7s8jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-7s8jp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-c2wf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-074064 -- exec busybox-5b5d89c9d6-c2wf4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
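The shell pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, takes the fifth line of busybox's nslookup output and extracts its third space-separated field: the resolved host IP that the follow-up ping -c 1 then targets from inside each pod. An equivalent parse in Go, shown only to make the awk/cut pair explicit (the sample output is illustrative):

package main

import (
	"fmt"
	"strings"
)

// hostIP mirrors: nslookup ... | awk 'NR==5' | cut -d' ' -f3
func hostIP(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // line 5
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // field 3
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}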
TestMultiNode/serial/AddNode (47.13s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-074064 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-074064 -v 3 --alsologtostderr: (46.554134434s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.13s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-074064 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp testdata/cp-test.txt multinode-074064:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1578894162/001/cp-test_multinode-074064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064:/home/docker/cp-test.txt multinode-074064-m02:/home/docker/cp-test_multinode-074064_multinode-074064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test_multinode-074064_multinode-074064-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064:/home/docker/cp-test.txt multinode-074064-m03:/home/docker/cp-test_multinode-074064_multinode-074064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test_multinode-074064_multinode-074064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp testdata/cp-test.txt multinode-074064-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1578894162/001/cp-test_multinode-074064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m02:/home/docker/cp-test.txt multinode-074064:/home/docker/cp-test_multinode-074064-m02_multinode-074064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test.txt"
E0229 01:14:10.694933  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test_multinode-074064-m02_multinode-074064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m02:/home/docker/cp-test.txt multinode-074064-m03:/home/docker/cp-test_multinode-074064-m02_multinode-074064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test_multinode-074064-m02_multinode-074064-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp testdata/cp-test.txt multinode-074064-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1578894162/001/cp-test_multinode-074064-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m03:/home/docker/cp-test.txt multinode-074064:/home/docker/cp-test_multinode-074064-m03_multinode-074064.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064 "sudo cat /home/docker/cp-test_multinode-074064-m03_multinode-074064.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 cp multinode-074064-m03:/home/docker/cp-test.txt multinode-074064-m02:/home/docker/cp-test_multinode-074064-m03_multinode-074064-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 ssh -n multinode-074064-m02 "sudo cat /home/docker/cp-test_multinode-074064-m03_multinode-074064-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.53s)
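The CopyFile sequence above is a full node-pair matrix for profile multinode-074064: cp pushes testdata into each node, pulls it back to the host, and copies it across to every other node, with an ssh -n ... "sudo cat" verification after each transfer. A sketch of how such a matrix enumerates the cross-node copies (the loop is illustrative, not minikube's test code):

package main

import "fmt"

func main() {
	profile := "multinode-074064"
	nodes := []string{"multinode-074064", "multinode-074064-m02", "multinode-074064-m03"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// Each cp below is followed in the log above by a "sudo cat" check on dst.
			fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				profile, src, dst, src, dst)
		}
	}
}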
TestMultiNode/serial/StopNode (3.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-074064 node stop m03: (2.436980761s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074064 status: exit status 7 (441.099156ms)

-- stdout --
	multinode-074064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr: exit status 7 (441.047202ms)

-- stdout --
	multinode-074064
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-074064-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-074064-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0229 01:14:17.048110  138906 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:14:17.048345  138906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:14:17.048354  138906 out.go:304] Setting ErrFile to fd 2...
	I0229 01:14:17.048358  138906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:14:17.048545  138906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:14:17.048729  138906 out.go:298] Setting JSON to false
	I0229 01:14:17.048759  138906 mustload.go:65] Loading cluster: multinode-074064
	I0229 01:14:17.048867  138906 notify.go:220] Checking for updates...
	I0229 01:14:17.049189  138906 config.go:182] Loaded profile config "multinode-074064": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:14:17.049207  138906 status.go:255] checking status of multinode-074064 ...
	I0229 01:14:17.049650  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.049717  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.067314  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0229 01:14:17.067670  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.068266  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.068286  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.068670  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.068872  138906 main.go:141] libmachine: (multinode-074064) Calling .GetState
	I0229 01:14:17.070302  138906 status.go:330] multinode-074064 host status = "Running" (err=<nil>)
	I0229 01:14:17.070323  138906 host.go:66] Checking if "multinode-074064" exists ...
	I0229 01:14:17.070678  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.070722  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.085048  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0229 01:14:17.085382  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.085879  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.085907  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.086219  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.086414  138906 main.go:141] libmachine: (multinode-074064) Calling .GetIP
	I0229 01:14:17.089103  138906 main.go:141] libmachine: (multinode-074064) DBG | domain multinode-074064 has defined MAC address 52:54:00:73:f7:ff in network mk-multinode-074064
	I0229 01:14:17.089489  138906 main.go:141] libmachine: (multinode-074064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:f7:ff", ip: ""} in network mk-multinode-074064: {Iface:virbr1 ExpiryTime:2024-02-29 02:11:31 +0000 UTC Type:0 Mac:52:54:00:73:f7:ff Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-074064 Clientid:01:52:54:00:73:f7:ff}
	I0229 01:14:17.089517  138906 main.go:141] libmachine: (multinode-074064) DBG | domain multinode-074064 has defined IP address 192.168.39.240 and MAC address 52:54:00:73:f7:ff in network mk-multinode-074064
	I0229 01:14:17.089663  138906 host.go:66] Checking if "multinode-074064" exists ...
	I0229 01:14:17.090075  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.090124  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.104613  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I0229 01:14:17.105022  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.105443  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.105462  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.105740  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.105974  138906 main.go:141] libmachine: (multinode-074064) Calling .DriverName
	I0229 01:14:17.106187  138906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:14:17.106219  138906 main.go:141] libmachine: (multinode-074064) Calling .GetSSHHostname
	I0229 01:14:17.108750  138906 main.go:141] libmachine: (multinode-074064) DBG | domain multinode-074064 has defined MAC address 52:54:00:73:f7:ff in network mk-multinode-074064
	I0229 01:14:17.109110  138906 main.go:141] libmachine: (multinode-074064) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:f7:ff", ip: ""} in network mk-multinode-074064: {Iface:virbr1 ExpiryTime:2024-02-29 02:11:31 +0000 UTC Type:0 Mac:52:54:00:73:f7:ff Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:multinode-074064 Clientid:01:52:54:00:73:f7:ff}
	I0229 01:14:17.109136  138906 main.go:141] libmachine: (multinode-074064) DBG | domain multinode-074064 has defined IP address 192.168.39.240 and MAC address 52:54:00:73:f7:ff in network mk-multinode-074064
	I0229 01:14:17.109313  138906 main.go:141] libmachine: (multinode-074064) Calling .GetSSHPort
	I0229 01:14:17.109468  138906 main.go:141] libmachine: (multinode-074064) Calling .GetSSHKeyPath
	I0229 01:14:17.109626  138906 main.go:141] libmachine: (multinode-074064) Calling .GetSSHUsername
	I0229 01:14:17.109751  138906 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/multinode-074064/id_rsa Username:docker}
	I0229 01:14:17.201893  138906 ssh_runner.go:195] Run: systemctl --version
	I0229 01:14:17.207986  138906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:14:17.223327  138906 kubeconfig.go:92] found "multinode-074064" server: "https://192.168.39.240:8443"
	I0229 01:14:17.223359  138906 api_server.go:166] Checking apiserver status ...
	I0229 01:14:17.223388  138906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 01:14:17.236748  138906 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup
	W0229 01:14:17.246624  138906 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 01:14:17.246668  138906 ssh_runner.go:195] Run: ls
	I0229 01:14:17.251429  138906 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0229 01:14:17.256402  138906 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0229 01:14:17.256423  138906 status.go:421] multinode-074064 apiserver status = Running (err=<nil>)
	I0229 01:14:17.256435  138906 status.go:257] multinode-074064 status: &{Name:multinode-074064 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:14:17.256469  138906 status.go:255] checking status of multinode-074064-m02 ...
	I0229 01:14:17.256832  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.256863  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.272508  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0229 01:14:17.272980  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.273456  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.273477  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.273849  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.274049  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetState
	I0229 01:14:17.275602  138906 status.go:330] multinode-074064-m02 host status = "Running" (err=<nil>)
	I0229 01:14:17.275622  138906 host.go:66] Checking if "multinode-074064-m02" exists ...
	I0229 01:14:17.275889  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.275921  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.291900  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0229 01:14:17.292287  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.292713  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.292738  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.293060  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.293251  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetIP
	I0229 01:14:17.295914  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | domain multinode-074064-m02 has defined MAC address 52:54:00:0a:8a:1b in network mk-multinode-074064
	I0229 01:14:17.296307  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8a:1b", ip: ""} in network mk-multinode-074064: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:44 +0000 UTC Type:0 Mac:52:54:00:0a:8a:1b Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-074064-m02 Clientid:01:52:54:00:0a:8a:1b}
	I0229 01:14:17.296329  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | domain multinode-074064-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:0a:8a:1b in network mk-multinode-074064
	I0229 01:14:17.296474  138906 host.go:66] Checking if "multinode-074064-m02" exists ...
	I0229 01:14:17.296867  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.296910  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.311307  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35419
	I0229 01:14:17.311722  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.312206  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.312225  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.312496  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.312663  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .DriverName
	I0229 01:14:17.312812  138906 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 01:14:17.312830  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetSSHHostname
	I0229 01:14:17.315393  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | domain multinode-074064-m02 has defined MAC address 52:54:00:0a:8a:1b in network mk-multinode-074064
	I0229 01:14:17.315819  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:8a:1b", ip: ""} in network mk-multinode-074064: {Iface:virbr1 ExpiryTime:2024-02-29 02:12:44 +0000 UTC Type:0 Mac:52:54:00:0a:8a:1b Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-074064-m02 Clientid:01:52:54:00:0a:8a:1b}
	I0229 01:14:17.315842  138906 main.go:141] libmachine: (multinode-074064-m02) DBG | domain multinode-074064-m02 has defined IP address 192.168.39.46 and MAC address 52:54:00:0a:8a:1b in network mk-multinode-074064
	I0229 01:14:17.315953  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetSSHPort
	I0229 01:14:17.316125  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetSSHKeyPath
	I0229 01:14:17.316266  138906 main.go:141] libmachine: (multinode-074064-m02) Calling .GetSSHUsername
	I0229 01:14:17.316377  138906 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/multinode-074064-m02/id_rsa Username:docker}
	I0229 01:14:17.397594  138906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 01:14:17.412885  138906 status.go:257] multinode-074064-m02 status: &{Name:multinode-074064-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:14:17.412920  138906 status.go:255] checking status of multinode-074064-m03 ...
	I0229 01:14:17.413225  138906 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:14:17.413256  138906 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:14:17.428488  138906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
	I0229 01:14:17.428868  138906 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:14:17.429357  138906 main.go:141] libmachine: Using API Version  1
	I0229 01:14:17.429384  138906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:14:17.429757  138906 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:14:17.429990  138906 main.go:141] libmachine: (multinode-074064-m03) Calling .GetState
	I0229 01:14:17.431454  138906 status.go:330] multinode-074064-m03 host status = "Stopped" (err=<nil>)
	I0229 01:14:17.431468  138906 status.go:343] host is not running, skipping remaining checks
	I0229 01:14:17.431473  138906 status.go:257] multinode-074064-m03 status: &{Name:multinode-074064-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.32s)

TestMultiNode/serial/StartAfterStop (141.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 node start m03 --alsologtostderr
E0229 01:14:57.864224  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:16:20.913020  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-074064 node start m03 --alsologtostderr: (2m20.745836451s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (141.39s)

TestMultiNode/serial/RestartKeepsNodes (172.14s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074064
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-074064
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-074064: (27.6831626s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074064 --wait=true -v=8 --alsologtostderr
E0229 01:19:10.694876  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074064 --wait=true -v=8 --alsologtostderr: (2m24.342842725s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074064
--- PASS: TestMultiNode/serial/RestartKeepsNodes (172.14s)

TestMultiNode/serial/DeleteNode (1.51s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 node delete m03
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.51s)

TestMultiNode/serial/StopMultiNode (25.53s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-074064 stop: (25.33184831s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status
E0229 01:19:57.863328  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074064 status: exit status 7 (95.22768ms)

-- stdout --
	multinode-074064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr: exit status 7 (97.758281ms)

-- stdout --
	multinode-074064
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-074064-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0229 01:19:57.958274  141082 out.go:291] Setting OutFile to fd 1 ...
	I0229 01:19:57.958538  141082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:19:57.958547  141082 out.go:304] Setting ErrFile to fd 2...
	I0229 01:19:57.958551  141082 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 01:19:57.958734  141082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
	I0229 01:19:57.958895  141082 out.go:298] Setting JSON to false
	I0229 01:19:57.958921  141082 mustload.go:65] Loading cluster: multinode-074064
	I0229 01:19:57.959017  141082 notify.go:220] Checking for updates...
	I0229 01:19:57.959378  141082 config.go:182] Loaded profile config "multinode-074064": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 01:19:57.959395  141082 status.go:255] checking status of multinode-074064 ...
	I0229 01:19:57.959912  141082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:19:57.959958  141082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:19:57.980583  141082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0229 01:19:57.980951  141082 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:19:57.981631  141082 main.go:141] libmachine: Using API Version  1
	I0229 01:19:57.981655  141082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:19:57.982010  141082 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:19:57.982218  141082 main.go:141] libmachine: (multinode-074064) Calling .GetState
	I0229 01:19:57.983743  141082 status.go:330] multinode-074064 host status = "Stopped" (err=<nil>)
	I0229 01:19:57.983755  141082 status.go:343] host is not running, skipping remaining checks
	I0229 01:19:57.983762  141082 status.go:257] multinode-074064 status: &{Name:multinode-074064 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 01:19:57.983783  141082 status.go:255] checking status of multinode-074064-m02 ...
	I0229 01:19:57.984046  141082 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 01:19:57.984078  141082 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 01:19:57.998656  141082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
	I0229 01:19:57.999054  141082 main.go:141] libmachine: () Calling .GetVersion
	I0229 01:19:57.999430  141082 main.go:141] libmachine: Using API Version  1
	I0229 01:19:57.999460  141082 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 01:19:57.999831  141082 main.go:141] libmachine: () Calling .GetMachineName
	I0229 01:19:58.000109  141082 main.go:141] libmachine: (multinode-074064-m02) Calling .GetState
	I0229 01:19:58.001678  141082 status.go:330] multinode-074064-m02 host status = "Stopped" (err=<nil>)
	I0229 01:19:58.001693  141082 status.go:343] host is not running, skipping remaining checks
	I0229 01:19:58.001701  141082 status.go:257] multinode-074064-m02 status: &{Name:multinode-074064-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.53s)
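As in StopNode earlier, minikube status intentionally exits non-zero (exit status 7 here) when any host in the profile is stopped, so a caller has to read the exit code without treating it as a hard failure. A minimal Go sketch of that pattern, using the command line from this run; interpreting code 7 as "stopped components" is taken from this log, not from a documented contract:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-074064", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("status exited with code", exitErr.ExitCode()) // 7 in the runs above
	} else if err != nil {
		panic(err) // the binary could not be started at all
	}
}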
TestMultiNode/serial/RestartMultiNode (115.53s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074064 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074064 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m54.979480129s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-074064 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.53s)

TestMultiNode/serial/ValidateNameConflict (52.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-074064
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074064-m02 --driver=kvm2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-074064-m02 --driver=kvm2 : exit status 14 (76.473366ms)

-- stdout --
	* [multinode-074064-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-074064-m02' is duplicated with machine name 'multinode-074064-m02' in profile 'multinode-074064'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-074064-m03 --driver=kvm2 
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-074064-m03 --driver=kvm2 : (51.075499012s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-074064
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-074064: exit status 80 (227.439608ms)

-- stdout --
	* Adding node m03 to cluster multinode-074064
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-074064-m03 already exists in multinode-074064-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-074064-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.24s)

TestPreload (166.47s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-902232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0229 01:24:10.694350  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-902232 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m28.514062021s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-902232 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-902232 image pull gcr.io/k8s-minikube/busybox: (1.186113693s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-902232
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-902232: (13.111500126s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-902232 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0229 01:24:57.863902  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-902232 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m2.40310794s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-902232 image list
helpers_test.go:175: Cleaning up "test-preload-902232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-902232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-902232: (1.047072622s)
--- PASS: TestPreload (166.47s)

TestScheduledStopUnix (117.27s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-751478 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-751478 --memory=2048 --driver=kvm2 : (45.527038243s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-751478 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-751478 -n scheduled-stop-751478
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-751478 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-751478 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-751478 -n scheduled-stop-751478
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-751478
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-751478 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0229 01:27:13.744846  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-751478
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-751478: exit status 7 (86.372738ms)

-- stdout --
	scheduled-stop-751478
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-751478 -n scheduled-stop-751478
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-751478 -n scheduled-stop-751478: exit status 7 (75.124189ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-751478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-751478
--- PASS: TestScheduledStopUnix (117.27s)
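The flow above schedules a stop (--schedule 5m, then 15s), cancels one (--cancel-scheduled), and finally polls status --format={{.Host}} until it reports Stopped with exit status 7. A hedged sketch of that polling loop; the flags are the ones shown above, while the interval and timeout are arbitrary choices for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-751478"
	deadline := time.Now().Add(2 * time.Minute) // arbitrary timeout for the sketch
	for time.Now().Before(deadline) {
		// Non-zero exit (status 7) still writes "Stopped" to stdout, so the error is ignored here.
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}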
TestSkaffold (143.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3293080239 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-250028 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-250028 --memory=2600 --driver=kvm2 : (49.310227886s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3293080239 run --minikube-profile skaffold-250028 --kube-context skaffold-250028 --status-check=true --port-forward=false --interactive=false
E0229 01:29:10.694801  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3293080239 run --minikube-profile skaffold-250028 --kube-context skaffold-250028 --status-check=true --port-forward=false --interactive=false: (1m21.429774908s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-69f467c6fc-89g2w" [ef01a4ea-8db1-4044-9882-16743d8b85c0] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004051933s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-756bf5b6fb-bjm9d" [ba483a7e-fc66-471f-9575-3e41d9f81701] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004145071s
helpers_test.go:175: Cleaning up "skaffold-250028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-250028
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-250028: (1.170639596s)
--- PASS: TestSkaffold (143.79s)

TestRunningBinaryUpgrade (187.47s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4253479018 start -p running-upgrade-703383 --memory=2200 --vm-driver=kvm2 
E0229 01:29:57.863396  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4253479018 start -p running-upgrade-703383 --memory=2200 --vm-driver=kvm2 : (1m48.213067053s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-703383 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-703383 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m17.5409597s)
helpers_test.go:175: Cleaning up "running-upgrade-703383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-703383
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-703383: (1.206813083s)
--- PASS: TestRunningBinaryUpgrade (187.47s)

                                                
                                    
TestPause/serial/Start (92.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-712913 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-712913 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m32.13704759s)
--- PASS: TestPause/serial/Start (92.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (97.679434ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-548668] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18063
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
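Note: the exit-14 failure above is the expected outcome; --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal repro outside the harness (profile name "demo" is hypothetical; the flags are taken from the log):

	# Exits with MK_USAGE (status 14) because the two flags conflict:
	minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2
	# If kubernetes-version is set as a global config value, clear it as the error text suggests:
	minikube config unset kubernetes-version
	minikube start -p demo --no-kubernetes --driver=kvm2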

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548668 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548668 --driver=kvm2 : (1m38.267370277s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548668 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.68s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (74.62s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-712913 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-712913 --alsologtostderr -v=1 --driver=kvm2 : (1m14.593835887s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (74.62s)

                                                
                                    
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-712913 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --driver=kvm2 : (6.779317807s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-548668 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-548668 status -o json: exit status 2 (263.145256ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-548668","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-548668
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.90s)
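Here exit status 2 is the signal being tested: the VM host is Running while Kubelet and APIServer are Stopped. A quick way to pull those fields from the JSON above, assuming jq is available:

	out/minikube-linux-amd64 -p NoKubernetes-548668 status -o json | jq -r '.Host, .Kubelet, .APIServer'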

                                                
                                    
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-712913 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-712913 --output=json --layout=cluster: exit status 2 (324.670369ms)

                                                
                                                
-- stdout --
	{"Name":"pause-712913","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-712913","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
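In the cluster layout above, StatusCode 418 is minikube's marker for a paused component, 405 for stopped, and 200 for OK. A sketch for extracting the per-component states, again assuming jq:

	out/minikube-linux-amd64 status -p pause-712913 --output=json --layout=cluster | jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}'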

                                                
                                    
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-712913 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
TestPause/serial/PauseAgain (1.14s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-712913 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-712913 --alsologtostderr -v=5: (1.137901439s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

                                                
                                    
TestPause/serial/DeletePaused (1.14s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-712913 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-712913 --alsologtostderr -v=5: (1.13764158s)
--- PASS: TestPause/serial/DeletePaused (1.14s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (6.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.2179057s)
--- PASS: TestPause/serial/VerifyDeletedResources (6.22s)
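The cleanup verification works off the profile list JSON. A sketch for listing the remaining profile names by hand (the valid/invalid field names are assumed from minikube's profile list output schema):

	out/minikube-linux-amd64 profile list --output json | jq -r '.valid[].Name'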

                                                
                                    
TestNoKubernetes/serial/Start (29.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548668 --no-kubernetes --driver=kvm2 : (29.519829961s)
--- PASS: TestNoKubernetes/serial/Start (29.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548668 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548668 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.793945ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
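minikube ssh exits 1 here, and the stderr line shows the remote command's real status: systemctl is-active exits 3 for an inactive unit, so any non-zero status is taken as proof that the kubelet is not running. To see the state by hand (without --quiet, is-active also prints it):

	# Prints "inactive" and exits 3 when Kubernetes is not running:
	out/minikube-linux-amd64 ssh -p NoKubernetes-548668 "sudo systemctl is-active kubelet"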

                                                
                                    
TestNoKubernetes/serial/ProfileList (74.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0229 01:34:10.694963  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1m8.242367919s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.020657833s)
--- PASS: TestNoKubernetes/serial/ProfileList (74.26s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-548668
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-548668: (2.160709347s)
--- PASS: TestNoKubernetes/serial/Stop (2.16s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (31.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-548668 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-548668 --driver=kvm2 : (31.843512114s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (31.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-548668 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-548668 "sudo systemctl is-active --quiet service kubelet": exit status 1 (226.327226ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (192.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2737979549 start -p stopped-upgrade-550506 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2737979549 start -p stopped-upgrade-550506 --memory=2200 --vm-driver=kvm2 : (1m27.284319212s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2737979549 -p stopped-upgrade-550506 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2737979549 -p stopped-upgrade-550506 stop: (13.177890247s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-550506 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-550506 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m32.231783534s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (192.69s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0229 01:37:29.577502  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:30.858179  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:33.418684  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:38.538940  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:37:48.779897  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m23.319674357s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-550506
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-550506: (1.237292812s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E0229 01:38:50.221480  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m22.379289828s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.38s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nj6lt" [46324c0d-8e3b-409b-8988-cbeb03782c8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nj6lt" [46324c0d-8e3b-409b-8988-cbeb03782c8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.025308718s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.23s)
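The NetCatPod step deploys testdata/netcat-deployment.yaml and polls for pods labeled app=netcat to become Ready. A roughly equivalent manual check (sketch; the harness polls the pod list itself rather than using kubectl wait):

	kubectl --context auto-579291 wait --for=condition=Ready pod -l app=netcat --timeout=15m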

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
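Taken together, the three checks above exercise cluster DNS, pod-local connectivity, and hairpin traffic (a pod reaching itself through its own Service). The same commands from the log, annotated:

	kubectl --context auto-579291 exec deployment/netcat -- nslookup kubernetes.default
	# DNS: the in-cluster resolver answers for kubernetes.default
	kubectl --context auto-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Localhost: the pod can reach its own port directly
	kubectl --context auto-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
	# HairPin: the pod reaches itself via the netcat Service name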

                                                
                                    
TestNetworkPlugins/group/calico/Start (104.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m44.578819817s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.58s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (96.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0229 01:39:42.887810  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m36.885735212s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.89s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-85ppp" [fb6ccef6-57ea-4714-a409-a2cd88f43916] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00782056s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jzwcw" [e72a861f-d711-435a-9590-e8c816a42833] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 01:39:57.863437  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jzwcw" [e72a861f-d711-435a-9590-e8c816a42833] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.004248571s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Start (77.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m17.942930114s)
--- PASS: TestNetworkPlugins/group/false/Start (77.94s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7zxrm" [cbe651fd-c738-4313-9362-21897a526071] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006978194s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fwgpk" [bffcebff-c231-4c47-aa73-48c30c79ca53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fwgpk" [bffcebff-c231-4c47-aa73-48c30c79ca53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005017516s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pl7mr" [8b4c0c13-ab7c-4a0a-b952-1295d4b51e1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pl7mr" [8b4c0c13-ab7c-4a0a-b952-1295d4b51e1d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003561698s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.63s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7s2zc" [cc3bed47-b2f1-4654-aee4-79dae77f8e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7s2zc" [cc3bed47-b2f1-4654-aee4-79dae77f8e9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.004935756s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (72.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m12.44642849s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (109.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m49.6714419s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (127.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (2m7.02846765s)
--- PASS: TestNetworkPlugins/group/bridge/Start (127.03s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (162.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E0229 01:42:28.298032  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:42:55.983068  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-579291 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (2m42.823727043s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (162.82s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2pw7p" [042fd822-b29b-45f1-8cbb-5466fdef10c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2pw7p" [042fd822-b29b-45f1-8cbb-5466fdef10c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006638051s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t8qpb" [dc329d49-3170-4f4f-9802-ebcbb625d920] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005460874s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hnbrg" [d0f7e8db-4d83-4e57-9828-c2053546246a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hnbrg" [d0f7e8db-4d83-4e57-9828-c2053546246a] Running
E0229 01:43:52.961362  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:52.966710  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:52.976995  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:52.997302  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:53.037607  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:53.118176  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:53.278649  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:53.599043  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:43:53.745401  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 01:43:54.239758  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.006750561s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nb9qj" [aea88bd8-baef-4272-846a-6ac6bebc75e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 01:43:58.081145  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nb9qj" [aea88bd8-baef-4272-846a-6ac6bebc75e8] Running
E0229 01:44:03.202336  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004872683s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0229 01:43:55.520353  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (87.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449532 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449532 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m27.205409719s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (87.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-384331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E0229 01:44:33.923425  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:44:42.886542  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:44:46.679939  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.685252  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.695529  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.715821  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.756159  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.836601  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:46.997087  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:47.317692  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:47.958357  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:49.238841  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:51.799355  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-384331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (1m27.466095069s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.47s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-579291 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-579291 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2sr4x" [cf72a270-ee82-4421-a398-b0399dfed2a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 01:44:56.920071  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:44:57.863086  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2sr4x" [cf72a270-ee82-4421-a398-b0399dfed2a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004435831s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.38s)

TestNetworkPlugins/group/kubenet/DNS (0.47s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-579291 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.47s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-579291 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0229 01:54:42.886965  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/skaffold-250028/client.crt: no such file or directory
E0229 01:54:46.679925  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:54:55.517246  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:54:57.863982  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:55:23.201594  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:56:05.332842  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-308557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E0229 01:45:27.641694  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-308557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m9.507805539s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.51s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-449532 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cfd4c5e-cbcc-4ad2-94f4-b7084e286067] Pending
helpers_test.go:344: "busybox" [9cfd4c5e-cbcc-4ad2-94f4-b7084e286067] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cfd4c5e-cbcc-4ad2-94f4-b7084e286067] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005483302s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-449532 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-449532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-449532 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/no-preload/serial/Stop (13.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-449532 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-449532 --alsologtostderr -v=3: (13.178868655s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.18s)

TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-384331 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3255edb3-d620-4c91-8353-653cb9eea0fd] Pending
helpers_test.go:344: "busybox" [3255edb3-d620-4c91-8353-653cb9eea0fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3255edb3-d620-4c91-8353-653cb9eea0fd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005312539s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-384331 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-384331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-384331 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018486044s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-384331 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (13.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-384331 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-384331 --alsologtostderr -v=3: (13.156834928s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449532 -n no-preload-449532
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449532 -n no-preload-449532: exit status 7 (75.862411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-449532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (602.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449532 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0229 01:46:05.332308  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.337599  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.347833  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.368207  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.408545  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.488891  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.649610  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:05.970753  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:06.611155  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:07.892195  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:08.601948  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:46:10.453424  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:11.262698  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.267967  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.278298  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.298649  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.339037  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.419413  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.579862  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:11.901008  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:12.541929  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:13.823006  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:15.574190  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449532 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (10m2.536564428s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449532 -n no-preload-449532
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (602.80s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-384331 -n embed-certs-384331
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-384331 -n embed-certs-384331: exit status 7 (77.718529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-384331 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (340.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-384331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E0229 01:46:16.383245  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:21.504295  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:25.814795  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:31.745110  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-384331 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (5m40.185935264s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-384331 -n embed-certs-384331
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (340.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-308557 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac4c3ee6-baee-4f74-aba8-8e1aa940d22e] Pending
helpers_test.go:344: "busybox" [ac4c3ee6-baee-4f74-aba8-8e1aa940d22e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0229 01:46:36.805363  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ac4c3ee6-baee-4f74-aba8-8e1aa940d22e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004165425s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-308557 exec busybox -- /bin/sh -c "ulimit -n"
E0229 01:46:44.239209  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.244489  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.254712  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.275058  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-308557 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0229 01:46:44.316133  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.397164  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.558062  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:44.878323  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-308557 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004411914s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-308557 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-308557 --alsologtostderr -v=3
E0229 01:46:45.519113  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:46.295803  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:46:46.799599  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:49.360463  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:46:52.225386  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:46:54.481512  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-308557 --alsologtostderr -v=3: (13.127886436s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557: exit status 7 (75.175598ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-308557 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (592.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-308557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E0229 01:47:04.722454  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:47:25.203024  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:47:27.256457  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/calico-579291/client.crt: no such file or directory
E0229 01:47:28.297945  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:47:30.522283  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kindnet-579291/client.crt: no such file or directory
E0229 01:47:33.185871  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
E0229 01:47:57.029295  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.034608  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.044875  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.065151  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.105401  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.185742  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.346223  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:57.667383  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:58.307752  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:47:59.588611  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:48:02.149345  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:48:06.163303  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/false-579291/client.crt: no such file or directory
E0229 01:48:07.269874  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-308557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (9m52.018744311s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (592.28s)

TestStartStop/group/old-k8s-version/serial/Stop (2.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-096771 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-096771 --alsologtostderr -v=3: (2.176540743s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-096771 -n old-k8s-version-096771: exit status 7 (76.722545ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-096771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hqqs6" [5db1c150-3c1f-4aa0-961e-435fc518378d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0056015s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hqqs6" [5db1c150-3c1f-4aa0-961e-435fc518378d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004874041s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-384331 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-384331 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-384331 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-384331 -n embed-certs-384331
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-384331 -n embed-certs-384331: exit status 2 (252.705597ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-384331 -n embed-certs-384331
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-384331 -n embed-certs-384331: exit status 2 (275.509737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-384331 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-384331 -n embed-certs-384331
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-384331 -n embed-certs-384331
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.72s)

TestStartStop/group/newest-cni/serial/FirstStart (70.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-133807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0229 01:52:28.297650  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:52:39.361309  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/kubenet-579291/client.crt: no such file or directory
E0229 01:52:57.028703  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-133807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m10.523382719s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.52s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-133807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/newest-cni/serial/Stop (13.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-133807 --alsologtostderr -v=3
E0229 01:53:24.713970  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/enable-default-cni-579291/client.crt: no such file or directory
E0229 01:53:35.606028  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-133807 --alsologtostderr -v=3: (13.128772342s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-133807 -n newest-cni-133807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-133807 -n newest-cni-133807: exit status 7 (81.337887ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-133807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (45.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-133807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0229 01:53:51.344178  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/gvisor-335344/client.crt: no such file or directory
E0229 01:53:52.961809  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/auto-579291/client.crt: no such file or directory
E0229 01:53:55.670903  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
E0229 01:54:03.291119  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/flannel-579291/client.crt: no such file or directory
E0229 01:54:10.695113  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-133807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (45.579026628s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-133807 -n newest-cni-133807
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-133807 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-133807 --alsologtostderr -v=1
E0229 01:54:23.354564  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/bridge-579291/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-133807 -n newest-cni-133807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-133807 -n newest-cni-133807: exit status 2 (274.69839ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-133807 -n newest-cni-133807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-133807 -n newest-cni-133807: exit status 2 (267.384486ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-133807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-133807 -n newest-cni-133807
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-133807 -n newest-cni-133807
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.62s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s7f6s" [f5ff9761-7ae6-4436-9663-59398b4b43f6] Running
E0229 01:56:11.262748  122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/custom-flannel-579291/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004942123s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s7f6s" [f5ff9761-7ae6-4436-9663-59398b4b43f6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005068923s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-449532 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449532 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.51s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-449532 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449532 -n no-preload-449532
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449532 -n no-preload-449532: exit status 2 (254.477125ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449532 -n no-preload-449532
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449532 -n no-preload-449532: exit status 2 (256.251431ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-449532 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449532 -n no-preload-449532
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449532 -n no-preload-449532
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.51s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jpht2" [6afa88d3-2477-4c8e-8d52-b6821fb24ec7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004791306s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jpht2" [6afa88d3-2477-4c8e-8d52-b6821fb24ec7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004692208s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-308557 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-308557 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
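The image check dumps every image loaded in the cluster as JSON and reports anything outside the expected Kubernetes set, which is how the two gcr.io/k8s-minikube test images above get flagged. A rough sketch of such a scan follows; the repoTags field name and the prefixes treated as expected are illustrative assumptions, not the test's real allowlist:

// imagescan.go - a rough sketch of scanning `image list --format=json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "default-k8s-diff-port-308557",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tags, _ := img["repoTags"].([]any) // assumed field name
		for _, t := range tags {
			tag, _ := t.(string)
			// Report anything outside an assumed "expected" prefix set,
			// mirroring the "Found non-minikube image" lines above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "docker.io/kubernetesui/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}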

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-308557 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557: exit status 2 (251.0148ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557: exit status 2 (237.367539ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-308557 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-308557 -n default-k8s-diff-port-308557
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

Test skip (29/332)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-579291 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-579291

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-579291

>>> host: /etc/nsswitch.conf:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/hosts:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/resolv.conf:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-579291

>>> host: crictl pods:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: crictl containers:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> k8s: describe netcat deployment:
error: context "cilium-579291" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-579291" does not exist

>>> k8s: netcat logs:
error: context "cilium-579291" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-579291" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-579291" does not exist

>>> k8s: coredns logs:
error: context "cilium-579291" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-579291" does not exist

>>> k8s: api server logs:
error: context "cilium-579291" does not exist

>>> host: /etc/cni:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: ip a s:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: ip r s:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: iptables-save:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: iptables table nat:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-579291

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-579291

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-579291" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-579291" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-579291

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-579291

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-579291" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-579291" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-579291" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-579291" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-579291" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: kubelet daemon config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> k8s: kubelet logs:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-579291

>>> host: docker daemon status:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: docker daemon config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: docker system info:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: cri-docker daemon status:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: cri-docker daemon config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: cri-dockerd version:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: containerd daemon status:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: containerd daemon config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: containerd config dump:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: crio daemon status:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: crio daemon config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: /etc/crio:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

>>> host: crio config:
* Profile "cilium-579291" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-579291"

----------------------- debugLogs end: cilium-579291 [took: 4.002062118s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-579291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-579291
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-103383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-103383
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)