Test Report: KVM_Linux 18259

                    
540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352

Test fail (10/330)

TestIngressAddonLegacy/StartLegacyK8sCluster (405.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E0229 17:47:53.224104   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:50:09.379318   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:50:37.066082   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:51:00.470055   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.475397   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.485734   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.506026   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.546321   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.626705   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.787147   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:01.107713   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:01.748097   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:03.028627   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:05.589423   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:10.710450   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:20.950813   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:41.431124   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:52:22.392299   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:53:44.315783   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : exit status 109 (6m45.217243631s)

-- stdout --
	* [ingress-addon-legacy-924574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node ingress-addon-legacy-924574 in cluster ingress-addon-legacy-924574
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892   51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541   51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912   51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

-- /stdout --
** stderr ** 
	I0229 17:47:48.400479   22364 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:47:48.400569   22364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:47:48.400574   22364 out.go:304] Setting ErrFile to fd 2...
	I0229 17:47:48.400582   22364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:47:48.400772   22364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:47:48.401330   22364 out.go:298] Setting JSON to false
	I0229 17:47:48.402202   22364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1819,"bootTime":1709227050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:47:48.402273   22364 start.go:139] virtualization: kvm guest
	I0229 17:47:48.404649   22364 out.go:177] * [ingress-addon-legacy-924574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:47:48.406356   22364 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:47:48.406319   22364 notify.go:220] Checking for updates...
	I0229 17:47:48.407700   22364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:47:48.409197   22364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:47:48.410575   22364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:47:48.411886   22364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:47:48.413346   22364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:47:48.414987   22364 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:47:48.448716   22364 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 17:47:48.450042   22364 start.go:299] selected driver: kvm2
	I0229 17:47:48.450052   22364 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:47:48.450062   22364 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:47:48.450823   22364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:47:48.450918   22364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:47:48.465761   22364 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:47:48.465811   22364 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:47:48.466033   22364 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 17:47:48.466098   22364 cni.go:84] Creating CNI manager for ""
	I0229 17:47:48.466117   22364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 17:47:48.466126   22364 start_flags.go:323] config:
	{Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:47:48.466253   22364 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:47:48.468923   22364 out.go:177] * Starting control plane node ingress-addon-legacy-924574 in cluster ingress-addon-legacy-924574
	I0229 17:47:48.470152   22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 17:47:48.494230   22364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 17:47:48.494259   22364 cache.go:56] Caching tarball of preloaded images
	I0229 17:47:48.494407   22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 17:47:48.496128   22364 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 17:47:48.497477   22364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:47:48.522027   22364 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 17:47:52.100633   22364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:47:52.100743   22364 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:47:52.880821   22364 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 17:47:52.881140   22364 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json ...
	I0229 17:47:52.881167   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json: {Name:mkf578002dea33b0c8dc25c2275a8c4958179e8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:47:52.881347   22364 start.go:365] acquiring machines lock for ingress-addon-legacy-924574: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:47:52.881389   22364 start.go:369] acquired machines lock for "ingress-addon-legacy-924574" in 20.69µs
	I0229 17:47:52.881411   22364 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Ku
bernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 17:47:52.881503   22364 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 17:47:52.883763   22364 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 17:47:52.883906   22364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:47:52.883949   22364 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:47:52.898092   22364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0229 17:47:52.898549   22364 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:47:52.899063   22364 main.go:141] libmachine: Using API Version  1
	I0229 17:47:52.899083   22364 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:47:52.899437   22364 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:47:52.899601   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
	I0229 17:47:52.899753   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:47:52.899880   22364 start.go:159] libmachine.API.Create for "ingress-addon-legacy-924574" (driver="kvm2")
	I0229 17:47:52.899910   22364 client.go:168] LocalClient.Create starting
	I0229 17:47:52.899944   22364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem
	I0229 17:47:52.899981   22364 main.go:141] libmachine: Decoding PEM data...
	I0229 17:47:52.900002   22364 main.go:141] libmachine: Parsing certificate...
	I0229 17:47:52.900071   22364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem
	I0229 17:47:52.900096   22364 main.go:141] libmachine: Decoding PEM data...
	I0229 17:47:52.900115   22364 main.go:141] libmachine: Parsing certificate...
	I0229 17:47:52.900140   22364 main.go:141] libmachine: Running pre-create checks...
	I0229 17:47:52.900153   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .PreCreateCheck
	I0229 17:47:52.900452   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
	I0229 17:47:52.900781   22364 main.go:141] libmachine: Creating machine...
	I0229 17:47:52.900795   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Create
	I0229 17:47:52.900895   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating KVM machine...
	I0229 17:47:52.902080   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found existing default KVM network
	I0229 17:47:52.902744   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:52.902622   22398 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0229 17:47:52.907905   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | trying to create private KVM network mk-ingress-addon-legacy-924574 192.168.39.0/24...
	I0229 17:47:52.972196   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | private KVM network mk-ingress-addon-legacy-924574 192.168.39.0/24 created
	I0229 17:47:52.972224   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting up store path in /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 ...
	I0229 17:47:52.972242   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:52.972190   22398 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:47:52.972261   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Building disk image from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:47:52.972332   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Downloading /home/jenkins/minikube-integration/18259-6402/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 17:47:53.190763   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.190620   22398 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa...
	I0229 17:47:53.367655   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.367530   22398 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/ingress-addon-legacy-924574.rawdisk...
	I0229 17:47:53.367696   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Writing magic tar header
	I0229 17:47:53.367710   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Writing SSH key tar header
	I0229 17:47:53.367719   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.367669   22398 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 ...
	I0229 17:47:53.367796   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574
	I0229 17:47:53.367856   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 (perms=drwx------)
	I0229 17:47:53.367882   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines
	I0229 17:47:53.367899   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines (perms=drwxr-xr-x)
	I0229 17:47:53.367913   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:47:53.367930   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402
	I0229 17:47:53.367943   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 17:47:53.367957   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins
	I0229 17:47:53.367971   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube (perms=drwxr-xr-x)
	I0229 17:47:53.367988   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402 (perms=drwxrwxr-x)
	I0229 17:47:53.368001   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 17:47:53.368010   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 17:47:53.368020   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating domain...
	I0229 17:47:53.368033   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home
	I0229 17:47:53.368046   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Skipping /home - not owner
	I0229 17:47:53.369094   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) define libvirt domain using xml: 
	I0229 17:47:53.369109   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <domain type='kvm'>
	I0229 17:47:53.369116   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <name>ingress-addon-legacy-924574</name>
	I0229 17:47:53.369121   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <memory unit='MiB'>4096</memory>
	I0229 17:47:53.369132   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <vcpu>2</vcpu>
	I0229 17:47:53.369136   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <features>
	I0229 17:47:53.369141   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <acpi/>
	I0229 17:47:53.369146   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <apic/>
	I0229 17:47:53.369150   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <pae/>
	I0229 17:47:53.369154   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     
	I0229 17:47:53.369159   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   </features>
	I0229 17:47:53.369164   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <cpu mode='host-passthrough'>
	I0229 17:47:53.369169   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   
	I0229 17:47:53.369173   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   </cpu>
	I0229 17:47:53.369179   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <os>
	I0229 17:47:53.369193   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <type>hvm</type>
	I0229 17:47:53.369211   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <boot dev='cdrom'/>
	I0229 17:47:53.369227   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <boot dev='hd'/>
	I0229 17:47:53.369234   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <bootmenu enable='no'/>
	I0229 17:47:53.369239   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   </os>
	I0229 17:47:53.369247   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   <devices>
	I0229 17:47:53.369253   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <disk type='file' device='cdrom'>
	I0229 17:47:53.369267   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/boot2docker.iso'/>
	I0229 17:47:53.369275   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <target dev='hdc' bus='scsi'/>
	I0229 17:47:53.369290   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <readonly/>
	I0229 17:47:53.369295   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </disk>
	I0229 17:47:53.369308   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <disk type='file' device='disk'>
	I0229 17:47:53.369324   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 17:47:53.369340   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/ingress-addon-legacy-924574.rawdisk'/>
	I0229 17:47:53.369347   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <target dev='hda' bus='virtio'/>
	I0229 17:47:53.369353   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </disk>
	I0229 17:47:53.369359   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <interface type='network'>
	I0229 17:47:53.369365   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <source network='mk-ingress-addon-legacy-924574'/>
	I0229 17:47:53.369370   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <model type='virtio'/>
	I0229 17:47:53.369376   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </interface>
	I0229 17:47:53.369381   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <interface type='network'>
	I0229 17:47:53.369392   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <source network='default'/>
	I0229 17:47:53.369401   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <model type='virtio'/>
	I0229 17:47:53.369407   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </interface>
	I0229 17:47:53.369412   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <serial type='pty'>
	I0229 17:47:53.369425   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <target port='0'/>
	I0229 17:47:53.369432   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </serial>
	I0229 17:47:53.369438   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <console type='pty'>
	I0229 17:47:53.369446   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <target type='serial' port='0'/>
	I0229 17:47:53.369452   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </console>
	I0229 17:47:53.369460   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     <rng model='virtio'>
	I0229 17:47:53.369467   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)       <backend model='random'>/dev/random</backend>
	I0229 17:47:53.369474   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     </rng>
	I0229 17:47:53.369480   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     
	I0229 17:47:53.369486   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)     
	I0229 17:47:53.369491   22364 main.go:141] libmachine: (ingress-addon-legacy-924574)   </devices>
	I0229 17:47:53.369497   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </domain>
	I0229 17:47:53.369504   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) 
	I0229 17:47:53.374000   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:67:27:48 in network default
	I0229 17:47:53.374513   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring networks are active...
	I0229 17:47:53.374526   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:53.375128   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring network default is active
	I0229 17:47:53.375378   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring network mk-ingress-addon-legacy-924574 is active
	I0229 17:47:53.375875   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Getting domain xml...
	I0229 17:47:53.376549   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating domain...
	I0229 17:47:54.558157   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Waiting to get IP...
	I0229 17:47:54.558852   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:54.559215   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:54.559253   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:54.559189   22398 retry.go:31] will retry after 300.043204ms: waiting for machine to come up
	I0229 17:47:54.860810   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:54.861165   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:54.861190   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:54.861124   22398 retry.go:31] will retry after 262.098032ms: waiting for machine to come up
	I0229 17:47:55.124489   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:55.124864   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:55.124895   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:55.124851   22398 retry.go:31] will retry after 448.178434ms: waiting for machine to come up
	I0229 17:47:55.574434   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:55.574830   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:55.574854   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:55.574788   22398 retry.go:31] will retry after 533.788809ms: waiting for machine to come up
	I0229 17:47:56.110641   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:56.111052   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:56.111078   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:56.111001   22398 retry.go:31] will retry after 695.183136ms: waiting for machine to come up
	I0229 17:47:56.808182   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:56.808548   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:56.808573   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:56.808506   22398 retry.go:31] will retry after 775.846643ms: waiting for machine to come up
	I0229 17:47:57.585650   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:57.586067   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:57.586096   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:57.586006   22398 retry.go:31] will retry after 1.082583506s: waiting for machine to come up
	I0229 17:47:58.669813   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:58.670199   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:58.670228   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:58.670143   22398 retry.go:31] will retry after 1.065634662s: waiting for machine to come up
	I0229 17:47:59.737054   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:47:59.737554   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:47:59.737587   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:59.737483   22398 retry.go:31] will retry after 1.165608856s: waiting for machine to come up
	I0229 17:48:00.904729   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:00.905063   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:48:00.905089   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:00.905029   22398 retry.go:31] will retry after 1.755378706s: waiting for machine to come up
	I0229 17:48:02.662894   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:02.663270   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:48:02.663301   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:02.663214   22398 retry.go:31] will retry after 2.878131769s: waiting for machine to come up
	I0229 17:48:05.544646   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:05.545053   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:48:05.545084   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:05.545002   22398 retry.go:31] will retry after 3.364383273s: waiting for machine to come up
	I0229 17:48:08.910792   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:08.911302   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:48:08.911333   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:08.911251   22398 retry.go:31] will retry after 2.832000314s: waiting for machine to come up
	I0229 17:48:11.746210   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:11.746594   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
	I0229 17:48:11.746625   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:11.746539   22398 retry.go:31] will retry after 3.45619964s: waiting for machine to come up
	I0229 17:48:15.205428   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:15.205939   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Found IP for machine: 192.168.39.8
	I0229 17:48:15.205959   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has current primary IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:15.205965   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Reserving static IP address...
	I0229 17:48:15.206304   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-924574", mac: "52:54:00:90:77:95", ip: "192.168.39.8"} in network mk-ingress-addon-legacy-924574
	I0229 17:48:15.279036   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Getting to WaitForSSH function...
	I0229 17:48:15.279070   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Reserved static IP address: 192.168.39.8
	I0229 17:48:15.279083   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Waiting for SSH to be available...
	I0229 17:48:15.281683   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:15.282007   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574
	I0229 17:48:15.282105   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-924574 interface with MAC address 52:54:00:90:77:95
	I0229 17:48:15.282292   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH client type: external
	I0229 17:48:15.282314   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa (-rw-------)
	I0229 17:48:15.282353   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:48:15.282367   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | About to run SSH command:
	I0229 17:48:15.282394   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | exit 0
	I0229 17:48:15.286152   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | SSH cmd err, output: exit status 255: 
	I0229 17:48:15.286178   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 17:48:15.286285   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | command : exit 0
	I0229 17:48:15.286306   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | err     : exit status 255
	I0229 17:48:15.286320   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | output  : 
	I0229 17:48:18.287257   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Getting to WaitForSSH function...
	I0229 17:48:18.289512   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.289949   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.289975   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.290082   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH client type: external
	I0229 17:48:18.290113   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa (-rw-------)
	I0229 17:48:18.290143   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 17:48:18.290157   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | About to run SSH command:
	I0229 17:48:18.290179   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | exit 0
	I0229 17:48:18.415496   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | SSH cmd err, output: <nil>: 
	I0229 17:48:18.415778   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) KVM machine creation complete!
	I0229 17:48:18.416106   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
	I0229 17:48:18.416613   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:18.416832   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:18.416990   22364 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 17:48:18.417003   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
	I0229 17:48:18.418310   22364 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 17:48:18.418330   22364 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 17:48:18.418337   22364 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 17:48:18.418347   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.420525   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.420864   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.420895   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.421007   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:18.421193   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.421362   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.421503   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:18.421690   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:18.421869   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:18.421879   22364 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 17:48:18.519376   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:48:18.519408   22364 main.go:141] libmachine: Detecting the provisioner...
	I0229 17:48:18.519419   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.522184   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.522600   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.522624   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.522778   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:18.522974   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.523187   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.523356   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:18.523536   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:18.523738   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:18.523752   22364 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 17:48:18.620466   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 17:48:18.620554   22364 main.go:141] libmachine: found compatible host: buildroot
	I0229 17:48:18.620566   22364 main.go:141] libmachine: Provisioning with buildroot...
	I0229 17:48:18.620580   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
	I0229 17:48:18.620874   22364 buildroot.go:166] provisioning hostname "ingress-addon-legacy-924574"
	I0229 17:48:18.620905   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
	I0229 17:48:18.621128   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.623573   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.623947   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.623976   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.624075   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:18.624289   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.624444   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.624583   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:18.624706   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:18.624904   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:18.624918   22364 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-924574 && echo "ingress-addon-legacy-924574" | sudo tee /etc/hostname
	I0229 17:48:18.734459   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-924574
	
	I0229 17:48:18.734483   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.737191   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.737519   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.737551   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.737782   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:18.737981   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.738137   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.738269   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:18.738425   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:18.738589   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:18.738608   22364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-924574' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-924574/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-924574' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:48:18.844725   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:48:18.844755   22364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 17:48:18.844789   22364 buildroot.go:174] setting up certificates
	I0229 17:48:18.844797   22364 provision.go:83] configureAuth start
	I0229 17:48:18.844807   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
	I0229 17:48:18.845087   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
	I0229 17:48:18.847576   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.847948   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.847984   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.848113   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.850264   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.850481   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.850505   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.850615   22364 provision.go:138] copyHostCerts
	I0229 17:48:18.850655   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 17:48:18.850690   22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 17:48:18.850722   22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 17:48:18.850796   22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 17:48:18.850866   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 17:48:18.850884   22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 17:48:18.850889   22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 17:48:18.850912   22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 17:48:18.850952   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 17:48:18.850968   22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 17:48:18.850974   22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 17:48:18.850993   22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 17:48:18.851036   22364 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-924574 san=[192.168.39.8 192.168.39.8 localhost 127.0.0.1 minikube ingress-addon-legacy-924574]
	I0229 17:48:18.906404   22364 provision.go:172] copyRemoteCerts
	I0229 17:48:18.906458   22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:48:18.906480   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:18.908930   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.909217   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:18.909252   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:18.909398   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:18.909551   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:18.909721   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:18.909839   22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:48:18.990635   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0229 17:48:18.990719   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 17:48:19.015341   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0229 17:48:19.015401   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0229 17:48:19.038541   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0229 17:48:19.038614   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 17:48:19.061830   22364 provision.go:86] duration metric: configureAuth took 217.020562ms
	I0229 17:48:19.061858   22364 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:48:19.062053   22364 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:48:19.062077   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:19.062353   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:19.064969   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.065252   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:19.065277   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.065419   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:19.065583   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.065755   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.065893   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:19.066034   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:19.066228   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:19.066241   22364 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 17:48:19.165237   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 17:48:19.165256   22364 buildroot.go:70] root file system type: tmpfs
	I0229 17:48:19.165382   22364 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 17:48:19.165404   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:19.167907   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.168242   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:19.168270   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.168469   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:19.168669   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.168867   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.168982   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:19.169158   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:19.169355   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:19.169448   22364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 17:48:19.282964   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 17:48:19.283002   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:19.285498   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.285816   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:19.285855   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:19.285981   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:19.286140   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.286268   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:19.286372   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:19.286534   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:19.286741   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:19.286783   22364 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 17:48:20.063969   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 17:48:20.063996   22364 main.go:141] libmachine: Checking connection to Docker...
	I0229 17:48:20.064005   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetURL
	I0229 17:48:20.065269   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using libvirt version 6000000
	I0229 17:48:20.067491   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.067830   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.067872   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.068044   22364 main.go:141] libmachine: Docker is up and running!
	I0229 17:48:20.068055   22364 main.go:141] libmachine: Reticulating splines...
	I0229 17:48:20.068060   22364 client.go:171] LocalClient.Create took 27.168141318s
	I0229 17:48:20.068081   22364 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-924574" took 27.168201447s
	I0229 17:48:20.068095   22364 start.go:300] post-start starting for "ingress-addon-legacy-924574" (driver="kvm2")
	I0229 17:48:20.068108   22364 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:48:20.068130   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:20.068376   22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:48:20.068408   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:20.070461   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.070818   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.070839   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.070973   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:20.071152   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:20.071327   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:20.071468   22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:48:20.150272   22364 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:48:20.154549   22364 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 17:48:20.154574   22364 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 17:48:20.154635   22364 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 17:48:20.154748   22364 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 17:48:20.154762   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> /etc/ssl/certs/136052.pem
	I0229 17:48:20.154841   22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 17:48:20.164147   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 17:48:20.189357   22364 start.go:303] post-start completed in 121.249815ms
	I0229 17:48:20.189401   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
	I0229 17:48:20.189950   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
	I0229 17:48:20.192285   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.192647   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.192675   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.192905   22364 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json ...
	I0229 17:48:20.193078   22364 start.go:128] duration metric: createHost completed in 27.311563843s
	I0229 17:48:20.193109   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:20.195031   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.195321   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.195341   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.195456   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:20.195619   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:20.195770   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:20.195928   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:20.196081   22364 main.go:141] libmachine: Using SSH client type: native
	I0229 17:48:20.196260   22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0229 17:48:20.196276   22364 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 17:48:20.292350   22364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709228900.269966997
	
	I0229 17:48:20.292382   22364 fix.go:206] guest clock: 1709228900.269966997
	I0229 17:48:20.292400   22364 fix.go:219] Guest: 2024-02-29 17:48:20.269966997 +0000 UTC Remote: 2024-02-29 17:48:20.193091996 +0000 UTC m=+31.837318159 (delta=76.875001ms)
	I0229 17:48:20.292434   22364 fix.go:190] guest clock delta is within tolerance: 76.875001ms
	I0229 17:48:20.292448   22364 start.go:83] releasing machines lock for "ingress-addon-legacy-924574", held for 27.411048515s
	I0229 17:48:20.292478   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:20.292738   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
	I0229 17:48:20.295253   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.295630   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.295675   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.295853   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:20.296338   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:20.296491   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:48:20.296573   22364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:48:20.296618   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:20.296668   22364 ssh_runner.go:195] Run: cat /version.json
	I0229 17:48:20.296691   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:48:20.299222   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.299474   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.299505   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.299529   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.299652   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:20.299831   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:20.299868   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:20.299891   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:20.300051   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:20.300068   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:48:20.300212   22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:48:20.300267   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:48:20.300415   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:48:20.300532   22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:48:20.372910   22364 ssh_runner.go:195] Run: systemctl --version
	I0229 17:48:20.398144   22364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 17:48:20.404022   22364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 17:48:20.404085   22364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 17:48:20.414117   22364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 17:48:20.432626   22364 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 17:48:20.432658   22364 start.go:475] detecting cgroup driver to use...
	I0229 17:48:20.432789   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:48:20.459183   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 17:48:20.471721   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 17:48:20.482648   22364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 17:48:20.482707   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 17:48:20.494638   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:48:20.506324   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 17:48:20.517854   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:48:20.529520   22364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:48:20.541075   22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 17:48:20.552598   22364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:48:20.563037   22364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 17:48:20.573341   22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:48:20.693104   22364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 17:48:20.718809   22364 start.go:475] detecting cgroup driver to use...
	I0229 17:48:20.718906   22364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 17:48:20.734265   22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:48:20.749388   22364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 17:48:20.769483   22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:48:20.784497   22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:48:20.801154   22364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 17:48:20.831855   22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:48:20.845834   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:48:20.865525   22364 ssh_runner.go:195] Run: which cri-dockerd
	I0229 17:48:20.869664   22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 17:48:20.879404   22364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 17:48:20.896615   22364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 17:48:21.016895   22364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 17:48:21.146479   22364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 17:48:21.146625   22364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 17:48:21.164759   22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:48:21.277078   22364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 17:48:23.116461   22364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.839342924s)
	I0229 17:48:23.116556   22364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:48:23.142234   22364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:48:23.168826   22364 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0229 17:48:23.168872   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
	I0229 17:48:23.171343   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:23.171609   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:48:23.171657   22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:48:23.171847   22364 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0229 17:48:23.176510   22364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:48:23.190409   22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 17:48:23.190494   22364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:48:23.208199   22364 docker.go:685] Got preloaded images: 
	I0229 17:48:23.208223   22364 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 17:48:23.208281   22364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 17:48:23.218614   22364 ssh_runner.go:195] Run: which lz4
	I0229 17:48:23.222673   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 17:48:23.222745   22364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 17:48:23.226775   22364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 17:48:23.226798   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 17:48:24.765370   22364 docker.go:649] Took 1.542630 seconds to copy over tarball
	I0229 17:48:24.765456   22364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 17:48:27.058565   22364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.293080317s)
	I0229 17:48:27.058590   22364 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 17:48:27.098240   22364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 17:48:27.108908   22364 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 17:48:27.126374   22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:48:27.247283   22364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 17:48:31.458576   22364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.211256283s)
	I0229 17:48:31.458686   22364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:48:31.480757   22364 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 17:48:31.480779   22364 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 17:48:31.480788   22364 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 17:48:31.482625   22364 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:48:31.482637   22364 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:48:31.482625   22364 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:48:31.482679   22364 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 17:48:31.482624   22364 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:48:31.482633   22364 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:48:31.482633   22364 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 17:48:31.482634   22364 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:48:31.483250   22364 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:48:31.483453   22364 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 17:48:31.483478   22364 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:48:31.483511   22364 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:48:31.483453   22364 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:48:31.483454   22364 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:48:31.483506   22364 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 17:48:31.483800   22364 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:48:31.637956   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 17:48:31.656178   22364 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 17:48:31.656219   22364 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 17:48:31.656255   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 17:48:31.662439   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 17:48:31.670481   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:48:31.673916   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:48:31.675358   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:48:31.679378   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0229 17:48:31.688296   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 17:48:31.692235   22364 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 17:48:31.692275   22364 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 17:48:31.692309   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 17:48:31.698545   22364 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 17:48:31.698582   22364 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:48:31.698615   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 17:48:31.727829   22364 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 17:48:31.727886   22364 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:48:31.727925   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 17:48:31.729724   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:48:31.735523   22364 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 17:48:31.735558   22364 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:48:31.735595   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 17:48:31.740748   22364 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 17:48:31.740787   22364 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 17:48:31.740828   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 17:48:31.757096   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0229 17:48:31.759089   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0229 17:48:31.794650   22364 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 17:48:31.794696   22364 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:48:31.794734   22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 17:48:31.794755   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0229 17:48:31.798077   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0229 17:48:31.802572   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0229 17:48:31.816369   22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0229 17:48:32.049650   22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:48:32.070695   22364 cache_images.go:92] LoadImages completed in 589.892517ms
	W0229 17:48:32.070762   22364 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I0229 17:48:32.070826   22364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 17:48:32.098167   22364 cni.go:84] Creating CNI manager for ""
	I0229 17:48:32.098188   22364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 17:48:32.098207   22364 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:48:32.098223   22364 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-924574 NodeName:ingress-addon-legacy-924574 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 17:48:32.098348   22364 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-924574"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 17:48:32.098416   22364 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-924574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 17:48:32.098471   22364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 17:48:32.108993   22364 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:48:32.109066   22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:48:32.119451   22364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0229 17:48:32.136865   22364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 17:48:32.154638   22364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0229 17:48:32.172052   22364 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0229 17:48:32.176244   22364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:48:32.189028   22364 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574 for IP: 192.168.39.8
	I0229 17:48:32.189061   22364 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.189235   22364 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 17:48:32.189311   22364 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 17:48:32.189360   22364 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key
	I0229 17:48:32.189379   22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt with IP's: []
	I0229 17:48:32.377815   22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt ...
	I0229 17:48:32.377844   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt: {Name:mk320b3274c2bb1527f295851eb825e478f7263b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.378022   22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key ...
	I0229 17:48:32.378041   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key: {Name:mk9789f5ea3b75ed9f1801ad0fb12835210feb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.378148   22364 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5
	I0229 17:48:32.378172   22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 with IP's: [192.168.39.8 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 17:48:32.587312   22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 ...
	I0229 17:48:32.587344   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5: {Name:mk68443d67bd671a96b725e56ab8e6b1af8d018e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.587515   22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5 ...
	I0229 17:48:32.587532   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5: {Name:mk956713b0c102c8329150e00fc994d8a0d1aff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.587661   22364 certs.go:337] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt
	I0229 17:48:32.587773   22364 certs.go:341] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key
	I0229 17:48:32.587851   22364 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key
	I0229 17:48:32.587872   22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt with IP's: []
	I0229 17:48:32.825207   22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt ...
	I0229 17:48:32.825239   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt: {Name:mk2f4c19c78da2dc09d24a86847ccea004b24dd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.825411   22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key ...
	I0229 17:48:32.825427   22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key: {Name:mk92ee93b316f69f56781dc42d0a9e7568b1ed33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:48:32.825527   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 17:48:32.825548   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 17:48:32.825579   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 17:48:32.825599   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 17:48:32.825614   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 17:48:32.825627   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 17:48:32.825636   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 17:48:32.825645   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 17:48:32.825721   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 17:48:32.825755   22364 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 17:48:32.825765   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 17:48:32.825800   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 17:48:32.825831   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 17:48:32.825865   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 17:48:32.825922   22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 17:48:32.825963   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem -> /usr/share/ca-certificates/13605.pem
	I0229 17:48:32.825982   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> /usr/share/ca-certificates/136052.pem
	I0229 17:48:32.826000   22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:48:32.826623   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:48:32.852579   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 17:48:32.876773   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:48:32.901419   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 17:48:32.926327   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:48:32.950585   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 17:48:32.975576   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:48:32.999863   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 17:48:33.023940   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 17:48:33.048437   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 17:48:33.072590   22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:48:33.096707   22364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:48:33.113652   22364 ssh_runner.go:195] Run: openssl version
	I0229 17:48:33.119598   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:48:33.130830   22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:48:33.135603   22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:48:33.135683   22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:48:33.141413   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 17:48:33.153340   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 17:48:33.164947   22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 17:48:33.169594   22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 17:48:33.169645   22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 17:48:33.175332   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 17:48:33.186715   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 17:48:33.198283   22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 17:48:33.202935   22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 17:48:33.202987   22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 17:48:33.208855   22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 17:48:33.220230   22364 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:48:33.224579   22364 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 17:48:33.224633   22364 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:48:33.224748   22364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 17:48:33.241898   22364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:48:33.253109   22364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 17:48:33.263577   22364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:48:33.273615   22364 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:48:33.273663   22364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:48:33.330498   22364 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:48:33.330584   22364 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:48:33.530811   22364 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:48:33.530939   22364 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:48:33.531044   22364 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:48:33.691581   22364 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:48:33.692582   22364 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:48:33.692652   22364 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:48:33.831035   22364 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:48:33.833210   22364 out.go:204]   - Generating certificates and keys ...
	I0229 17:48:33.836016   22364 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:48:33.836135   22364 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:48:33.970431   22364 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 17:48:34.126323   22364 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 17:48:34.248746   22364 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 17:48:34.392101   22364 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 17:48:34.621483   22364 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 17:48:34.621794   22364 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0229 17:48:34.915940   22364 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 17:48:34.916152   22364 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	I0229 17:48:35.179651   22364 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 17:48:35.356070   22364 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 17:48:35.483977   22364 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 17:48:35.484165   22364 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:48:35.595362   22364 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:48:35.826575   22364 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:48:36.056336   22364 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:48:36.508366   22364 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:48:36.509059   22364 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:48:36.511102   22364 out.go:204]   - Booting up control plane ...
	I0229 17:48:36.511208   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:48:36.528846   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:48:36.528978   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:48:36.531649   22364 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:48:36.532382   22364 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:49:16.529546   22364 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:52:36.530209   22364 kubeadm.go:322] 
	I0229 17:52:36.530290   22364 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:52:36.530330   22364 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:52:36.530336   22364 kubeadm.go:322] 
	I0229 17:52:36.530381   22364 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:52:36.530415   22364 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:52:36.530599   22364 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:52:36.530637   22364 kubeadm.go:322] 
	I0229 17:52:36.530761   22364 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:52:36.530815   22364 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:52:36.530857   22364 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:52:36.530865   22364 kubeadm.go:322] 
	I0229 17:52:36.530989   22364 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:52:36.531092   22364 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:52:36.531103   22364 kubeadm.go:322] 
	I0229 17:52:36.531203   22364 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 17:52:36.531273   22364 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 17:52:36.531360   22364 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:52:36.531412   22364 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 17:52:36.531422   22364 kubeadm.go:322] 
	I0229 17:52:36.532053   22364 kubeadm.go:322] W0229 17:48:33.309569    1367 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:52:36.532326   22364 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 17:52:36.532517   22364 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 17:52:36.532620   22364 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:52:36.532740   22364 kubeadm.go:322] W0229 17:48:36.501119    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:52:36.532846   22364 kubeadm.go:322] W0229 17:48:36.508719    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:52:36.532917   22364 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:52:36.532975   22364 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 17:52:36.533150   22364 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:48:33.309569    1367 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:48:36.501119    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:48:36.508719    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:48:33.309569    1367 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:48:36.501119    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:48:36.508719    1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 17:52:36.533211   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 17:52:36.965137   22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:52:36.980167   22364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:52:36.990551   22364 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:52:36.990595   22364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 17:52:37.048475   22364 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 17:52:37.048535   22364 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:52:37.249481   22364 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:52:37.249596   22364 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:52:37.249742   22364 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:52:37.413294   22364 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:52:37.414225   22364 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:52:37.414294   22364 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:52:37.553117   22364 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:52:37.555853   22364 out.go:204]   - Generating certificates and keys ...
	I0229 17:52:37.555963   22364 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:52:37.556045   22364 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:52:37.556263   22364 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 17:52:37.556849   22364 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 17:52:37.557793   22364 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 17:52:37.558369   22364 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 17:52:37.559159   22364 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 17:52:37.559606   22364 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 17:52:37.560121   22364 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 17:52:37.560477   22364 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 17:52:37.560633   22364 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 17:52:37.560688   22364 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:52:37.707855   22364 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:52:37.845135   22364 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:52:37.936691   22364 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:52:38.087992   22364 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:52:38.088782   22364 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:52:38.090680   22364 out.go:204]   - Booting up control plane ...
	I0229 17:52:38.090788   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:52:38.095162   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:52:38.096224   22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:52:38.096961   22364 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:52:38.100186   22364 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:53:18.102413   22364 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 17:53:18.103172   22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:53:18.103339   22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:53:23.103926   22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:53:23.104117   22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:53:33.104802   22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:53:33.104995   22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:53:53.106246   22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:53:53.106736   22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:54:33.108552   22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 17:54:33.108793   22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 17:54:33.108804   22364 kubeadm.go:322] 
	I0229 17:54:33.108851   22364 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 17:54:33.108922   22364 kubeadm.go:322] 		timed out waiting for the condition
	I0229 17:54:33.108932   22364 kubeadm.go:322] 
	I0229 17:54:33.108977   22364 kubeadm.go:322] 	This error is likely caused by:
	I0229 17:54:33.109050   22364 kubeadm.go:322] 		- The kubelet is not running
	I0229 17:54:33.109194   22364 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 17:54:33.109207   22364 kubeadm.go:322] 
	I0229 17:54:33.109326   22364 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 17:54:33.109374   22364 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 17:54:33.109434   22364 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 17:54:33.109444   22364 kubeadm.go:322] 
	I0229 17:54:33.109559   22364 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 17:54:33.109675   22364 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 17:54:33.109690   22364 kubeadm.go:322] 
	I0229 17:54:33.109768   22364 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 17:54:33.109820   22364 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 17:54:33.109936   22364 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 17:54:33.109978   22364 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 17:54:33.109989   22364 kubeadm.go:322] 
	I0229 17:54:33.110803   22364 kubeadm.go:322] W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 17:54:33.110993   22364 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 17:54:33.111182   22364 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 17:54:33.111338   22364 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:54:33.111468   22364 kubeadm.go:322] W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:54:33.111621   22364 kubeadm.go:322] W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 17:54:33.111738   22364 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 17:54:33.111832   22364 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 17:54:33.111921   22364 kubeadm.go:406] StartCluster complete in 5m59.88729152s
	I0229 17:54:33.112013   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 17:54:33.135201   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.135249   22364 logs.go:278] No container was found matching "kube-apiserver"
	I0229 17:54:33.135300   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 17:54:33.152819   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.152850   22364 logs.go:278] No container was found matching "etcd"
	I0229 17:54:33.152909   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 17:54:33.169889   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.169916   22364 logs.go:278] No container was found matching "coredns"
	I0229 17:54:33.169968   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 17:54:33.188070   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.188098   22364 logs.go:278] No container was found matching "kube-scheduler"
	I0229 17:54:33.188157   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 17:54:33.205788   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.205815   22364 logs.go:278] No container was found matching "kube-proxy"
	I0229 17:54:33.205873   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 17:54:33.232848   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.232884   22364 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 17:54:33.232945   22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 17:54:33.254887   22364 logs.go:276] 0 containers: []
	W0229 17:54:33.254914   22364 logs.go:278] No container was found matching "kindnet"
	I0229 17:54:33.254927   22364 logs.go:123] Gathering logs for Docker ...
	I0229 17:54:33.254941   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 17:54:33.304386   22364 logs.go:123] Gathering logs for container status ...
	I0229 17:54:33.304421   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 17:54:33.400088   22364 logs.go:123] Gathering logs for kubelet ...
	I0229 17:54:33.400119   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 17:54:33.430387   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892   51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.436594   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541   51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.442827   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912   51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.449017   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:28 ingress-addon-legacy-924574 kubelet[51910]: F0229 17:54:28.591907   51910 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.455198   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:30 ingress-addon-legacy-924574 kubelet[52092]: F0229 17:54:30.025996   52092 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.461424   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:31 ingress-addon-legacy-924574 kubelet[52276]: F0229 17:54:31.268886   52276 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 17:54:33.467607   22364 logs.go:138] Found kubelet problem: Feb 29 17:54:32 ingress-addon-legacy-924574 kubelet[52461]: F0229 17:54:32.515121   52461 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:54:33.468453   22364 logs.go:123] Gathering logs for dmesg ...
	I0229 17:54:33.468471   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 17:54:33.485109   22364 logs.go:123] Gathering logs for describe nodes ...
	I0229 17:54:33.485133   22364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 17:54:33.550239   22364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0229 17:54:33.550270   22364 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 17:54:33.550314   22364 out.go:239] * 
	* 
	W0229 17:54:33.550368   22364 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:54:33.550396   22364 out.go:239] * 
	* 
	W0229 17:54:33.551451   22364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 17:54:33.553636   22364 out.go:177] X Problems detected in kubelet:
	I0229 17:54:33.554849   22364 out.go:177]   Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892   51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:54:33.556254   22364 out.go:177]   Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541   51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:54:33.557835   22364 out.go:177]   Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912   51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 17:54:33.560652   22364 out.go:177] 
	W0229 17:54:33.562037   22364 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 17:52:37.032934   36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 17:52:38.080047   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 17:52:38.081127   36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 17:54:33.562091   22364 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 17:54:33.562118   22364 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 17:54:33.563828   22364 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (405.28s)
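In the run above the kubelet never becomes healthy: it repeatedly crashes on ContainerManager startup and the healthz endpoint on 10248 stays connection-refused, and minikube's closing suggestion is to check journalctl and retry with kubelet.cgroup-driver=systemd. A hedged troubleshooting sketch, using only the commands the log itself suggests and assuming shell access to the ingress-addon-legacy-924574 VM via minikube ssh:

	# Inspect kubelet state on the node (the checks kubeadm suggests above)
	minikube ssh -p ingress-addon-legacy-924574 -- sudo systemctl status kubelet
	minikube ssh -p ingress-addon-legacy-924574 -- sudo journalctl -xeu kubelet

	# List control-plane containers that may have crashed (also suggested above)
	minikube ssh -p ingress-addon-legacy-924574 -- "docker ps -a | grep kube | grep -v pause"

	# Retry the start with the kubelet cgroup driver aligned to systemd, per minikube's suggestion
	out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 \
	  --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd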

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (91.79s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-924574 addons enable ingress --alsologtostderr -v=5
E0229 17:55:09.379098   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:56:00.470124   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-924574 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m31.571276266s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:54:33.687013   23488 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:54:33.687157   23488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:54:33.687166   23488 out.go:304] Setting ErrFile to fd 2...
	I0229 17:54:33.687170   23488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:54:33.687369   23488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:54:33.687619   23488 mustload.go:65] Loading cluster: ingress-addon-legacy-924574
	I0229 17:54:33.687975   23488 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:54:33.687994   23488 addons.go:597] checking whether the cluster is paused
	I0229 17:54:33.688074   23488 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:54:33.688086   23488 host.go:66] Checking if "ingress-addon-legacy-924574" exists ...
	I0229 17:54:33.688424   23488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:54:33.688460   23488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:54:33.702942   23488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0229 17:54:33.703351   23488 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:54:33.703870   23488 main.go:141] libmachine: Using API Version  1
	I0229 17:54:33.703893   23488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:54:33.704350   23488 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:54:33.704551   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
	I0229 17:54:33.706095   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:54:33.706314   23488 ssh_runner.go:195] Run: systemctl --version
	I0229 17:54:33.706338   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:54:33.708516   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:54:33.708896   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:54:33.708925   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:54:33.709069   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:54:33.709214   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:54:33.709378   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:54:33.709487   23488 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:54:33.791941   23488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 17:54:33.829365   23488 main.go:141] libmachine: Making call to close driver server
	I0229 17:54:33.829399   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Close
	I0229 17:54:33.829684   23488 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:54:33.829703   23488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:54:33.832153   23488 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 17:54:33.833830   23488 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:54:33.833852   23488 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-924574"
	I0229 17:54:33.833863   23488 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-924574"
	I0229 17:54:33.833905   23488 host.go:66] Checking if "ingress-addon-legacy-924574" exists ...
	I0229 17:54:33.834404   23488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:54:33.834457   23488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:54:33.850000   23488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46881
	I0229 17:54:33.850496   23488 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:54:33.851102   23488 main.go:141] libmachine: Using API Version  1
	I0229 17:54:33.851135   23488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:54:33.851538   23488 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:54:33.851995   23488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:54:33.852033   23488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:54:33.866985   23488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46559
	I0229 17:54:33.867405   23488 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:54:33.867984   23488 main.go:141] libmachine: Using API Version  1
	I0229 17:54:33.868016   23488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:54:33.868397   23488 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:54:33.868680   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
	I0229 17:54:33.870433   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:54:33.872767   23488 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 17:54:33.874078   23488 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:54:33.875341   23488 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 17:54:33.876779   23488 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:54:33.876801   23488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 17:54:33.876829   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:54:33.880082   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:54:33.880485   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:54:33.880508   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:54:33.880783   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:54:33.880988   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:54:33.881120   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:54:33.881234   23488 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:54:33.980981   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:34.052402   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:34.052445   23488 retry.go:31] will retry after 283.802643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:34.337097   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:34.410536   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:34.410564   23488 retry.go:31] will retry after 361.040642ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:34.772152   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:34.893278   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:34.893342   23488 retry.go:31] will retry after 706.542976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:35.600276   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:35.663447   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:35.663484   23488 retry.go:31] will retry after 1.247169171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:36.911824   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:36.973890   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:36.973922   23488 retry.go:31] will retry after 1.766701479s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:38.741058   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:38.805302   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:38.805326   23488 retry.go:31] will retry after 2.233114106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:41.039861   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:41.155121   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:41.155163   23488 retry.go:31] will retry after 1.975452426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:43.132421   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:43.202756   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:43.202788   23488 retry.go:31] will retry after 5.836264649s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:49.039757   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:49.103613   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:49.103653   23488 retry.go:31] will retry after 6.821252317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:55.926991   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:54:55.990546   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:54:55.990574   23488 retry.go:31] will retry after 7.411737588s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:03.406306   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:55:03.476964   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:03.477000   23488 retry.go:31] will retry after 7.846211286s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:11.325324   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:55:11.387420   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:11.387456   23488 retry.go:31] will retry after 16.491502564s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:27.882913   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:55:27.947435   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:55:27.947472   23488 retry.go:31] will retry after 37.177547375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:05.125507   23488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 17:56:05.188411   23488 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:05.188468   23488 main.go:141] libmachine: Making call to close driver server
	I0229 17:56:05.188480   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Close
	I0229 17:56:05.188802   23488 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:56:05.188821   23488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:56:05.188832   23488 main.go:141] libmachine: Making call to close driver server
	I0229 17:56:05.188841   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Close
	I0229 17:56:05.190006   23488 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:56:05.190077   23488 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Closing plugin on server side
	I0229 17:56:05.190112   23488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:56:05.190134   23488 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-924574"
	I0229 17:56:05.192221   23488 out.go:177] * Verifying ingress addon...
	I0229 17:56:05.194602   23488 out.go:177] 
	W0229 17:56:05.196142   23488 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-924574" does not exist: client config: context "ingress-addon-legacy-924574" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-924574" does not exist: client config: context "ingress-addon-legacy-924574" does not exist]
	W0229 17:56:05.196157   23488 out.go:239] * 
	* 
	W0229 17:56:05.198172   23488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 17:56:05.199660   23488 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-924574 -n ingress-addon-legacy-924574
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-924574 -n ingress-addon-legacy-924574: exit status 6 (221.303113ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 17:56:05.410890   23730 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-924574" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-924574" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (91.79s)
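This failure is downstream of the cluster never starting: every kubectl apply of ingress-deploy.yaml is refused at localhost:8443, and the post-mortem shows the kubeconfig no longer carries a context for the profile. A hedged sketch of how the retry loop could be reproduced by hand once the control plane is actually up, assuming the same profile name and reusing the exact apply command from the log:

	# Repair the kubectl context, as the status output above recommends
	out/minikube-linux-amd64 update-context -p ingress-addon-legacy-924574

	# Re-run the apply that the addon manager kept retrying, from inside the VM
	minikube ssh -p ingress-addon-legacy-924574 -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml

	# Then re-enable and verify the addon through minikube itself
	out/minikube-linux-amd64 -p ingress-addon-legacy-924574 addons enable ingress --alsologtostderr -v=5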

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.18s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-924574 addons enable ingress-dns --alsologtostderr -v=5
E0229 17:56:28.158651   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-924574 addons enable ingress-dns --alsologtostderr -v=5: signal: killed (1m42.933702512s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 17:56:05.478965   23763 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:56:05.479159   23763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:56:05.479172   23763 out.go:304] Setting ErrFile to fd 2...
	I0229 17:56:05.479179   23763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:56:05.479402   23763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:56:05.479734   23763 mustload.go:65] Loading cluster: ingress-addon-legacy-924574
	I0229 17:56:05.480100   23763 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:56:05.480127   23763 addons.go:597] checking whether the cluster is paused
	I0229 17:56:05.480228   23763 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:56:05.480245   23763 host.go:66] Checking if "ingress-addon-legacy-924574" exists ...
	I0229 17:56:05.480608   23763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:56:05.480669   23763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:56:05.494900   23763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0229 17:56:05.495299   23763 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:56:05.495846   23763 main.go:141] libmachine: Using API Version  1
	I0229 17:56:05.495869   23763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:56:05.496218   23763 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:56:05.496401   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
	I0229 17:56:05.497807   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:56:05.498015   23763 ssh_runner.go:195] Run: systemctl --version
	I0229 17:56:05.498039   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:56:05.499995   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:56:05.500379   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:56:05.500402   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:56:05.500545   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:56:05.500680   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:56:05.500828   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:56:05.500950   23763 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:56:05.574201   23763 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 17:56:05.595243   23763 main.go:141] libmachine: Making call to close driver server
	I0229 17:56:05.595273   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Close
	I0229 17:56:05.595560   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Closing plugin on server side
	I0229 17:56:05.595603   23763 main.go:141] libmachine: Successfully made call to close driver server
	I0229 17:56:05.595617   23763 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 17:56:05.598102   23763 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 17:56:05.599618   23763 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 17:56:05.599633   23763 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-924574"
	I0229 17:56:05.599653   23763 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-924574"
	I0229 17:56:05.599690   23763 host.go:66] Checking if "ingress-addon-legacy-924574" exists ...
	I0229 17:56:05.599966   23763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:56:05.600011   23763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:56:05.614044   23763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I0229 17:56:05.614430   23763 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:56:05.614903   23763 main.go:141] libmachine: Using API Version  1
	I0229 17:56:05.614923   23763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:56:05.615286   23763 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:56:05.615772   23763 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:56:05.615834   23763 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:56:05.629556   23763 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44879
	I0229 17:56:05.629899   23763 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:56:05.630280   23763 main.go:141] libmachine: Using API Version  1
	I0229 17:56:05.630297   23763 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:56:05.630596   23763 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:56:05.630831   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
	I0229 17:56:05.632348   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
	I0229 17:56:05.634404   23763 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 17:56:05.635989   23763 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:56:05.636005   23763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 17:56:05.636018   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
	I0229 17:56:05.638669   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:56:05.639047   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
	I0229 17:56:05.639065   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
	I0229 17:56:05.639230   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
	I0229 17:56:05.639391   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
	I0229 17:56:05.639529   23763 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
	I0229 17:56:05.639662   23763 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
	I0229 17:56:05.733693   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:05.828923   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:05.828957   23763 retry.go:31] will retry after 169.392816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:05.999458   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:06.068034   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:06.068061   23763 retry.go:31] will retry after 269.557268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:06.338616   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:06.399941   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:06.399977   23763 retry.go:31] will retry after 571.38362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:06.971741   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:07.053369   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:07.053404   23763 retry.go:31] will retry after 582.328945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:07.636184   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:07.698584   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:07.698615   23763 retry.go:31] will retry after 1.394848665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:09.094240   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:09.159908   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:09.159935   23763 retry.go:31] will retry after 2.39135277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:11.551790   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:11.617106   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:11.617140   23763 retry.go:31] will retry after 1.619389353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:13.238037   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:13.344186   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:13.344237   23763 retry.go:31] will retry after 4.358043901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:17.703684   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:17.776106   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:17.776138   23763 retry.go:31] will retry after 4.190432026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:21.967515   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:22.052071   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:22.052119   23763 retry.go:31] will retry after 10.522542304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:32.576076   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:32.637744   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:32.637786   23763 retry.go:31] will retry after 19.884949465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:52.523824   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:56:52.608947   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:56:52.608981   23763 retry.go:31] will retry after 27.458772208s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:20.071766   23763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 17:57:20.137687   23763 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 17:57:20.137750   23763 retry.go:31] will retry after 35.109020889s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-924574 -n ingress-addon-legacy-924574
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-924574 -n ingress-addon-legacy-924574: exit status 6 (244.098527ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 17:57:48.587278   24021 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-924574" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-924574" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.18s)
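Every retry in the stderr capture above fails with "The connection to the server localhost:8443 was refused", so the ingress-dns manifest never reaches an apiserver and the enable command is killed after ~1m43s (signal: killed) without succeeding. A hedged sketch of how the control plane on that VM could be checked afterwards (assumed commands, not captured in this run):

	# is the kube-apiserver container still running inside the VM?
	out/minikube-linux-amd64 ssh -p ingress-addon-legacy-924574 -- docker ps --filter name=kube-apiserver
	# kubelet restarts the static apiserver pod; its status usually explains why the apiserver is down
	out/minikube-linux-amd64 ssh -p ingress-addon-legacy-924574 -- sudo systemctl status kubelet --no-pager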

                                                
                                    
TestKubernetesUpgrade (416.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : exit status 109 (5m25.678593308s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-235196] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-235196 in cluster kubernetes-upgrade-235196
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
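In the stdout above, "Generating certificates and keys" and "Booting up control plane" each appear twice, which suggests the kubeadm bootstrap failed on its first attempt and was retried before the start command exited with status 109. A hedged sketch for pulling the control-plane boot logs off the VM after such a failure (assumed commands, not part of the recorded run):

	# collect the kubelet journal and the full cluster log bundle from the failed profile
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-235196 -- sudo journalctl -u kubelet --no-pager | tail -n 100
	out/minikube-linux-amd64 logs -p kubernetes-upgrade-235196 --file=kubernetes-upgrade-235196.log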
** stderr ** 
	I0229 18:21:59.785230   37379 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:21:59.785405   37379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:21:59.785418   37379 out.go:304] Setting ErrFile to fd 2...
	I0229 18:21:59.785424   37379 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:21:59.785691   37379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:21:59.786485   37379 out.go:298] Setting JSON to false
	I0229 18:21:59.787819   37379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3870,"bootTime":1709227050,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:21:59.787909   37379 start.go:139] virtualization: kvm guest
	I0229 18:21:59.790461   37379 out.go:177] * [kubernetes-upgrade-235196] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:21:59.792170   37379 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:21:59.793318   37379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:21:59.792241   37379 notify.go:220] Checking for updates...
	I0229 18:21:59.795827   37379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:21:59.797061   37379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:21:59.798374   37379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:21:59.799621   37379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:21:59.801320   37379 config.go:182] Loaded profile config "NoKubernetes-960195": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:21:59.801442   37379 config.go:182] Loaded profile config "pause-398168": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:21:59.801543   37379 config.go:182] Loaded profile config "running-upgrade-799804": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 18:21:59.801652   37379 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:21:59.841899   37379 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:21:59.843119   37379 start.go:299] selected driver: kvm2
	I0229 18:21:59.843133   37379 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:21:59.843148   37379 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:21:59.844066   37379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:21:59.844180   37379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:21:59.865280   37379 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:21:59.865344   37379 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:21:59.865626   37379 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 18:21:59.865708   37379 cni.go:84] Creating CNI manager for ""
	I0229 18:21:59.865737   37379 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:21:59.865754   37379 start_flags.go:323] config:
	{Name:kubernetes-upgrade-235196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:21:59.865981   37379 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:21:59.867911   37379 out.go:177] * Starting control plane node kubernetes-upgrade-235196 in cluster kubernetes-upgrade-235196
	I0229 18:21:59.869452   37379 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:21:59.869505   37379 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 18:21:59.869519   37379 cache.go:56] Caching tarball of preloaded images
	I0229 18:21:59.869609   37379 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:21:59.869625   37379 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 18:21:59.869742   37379 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/config.json ...
	I0229 18:21:59.869775   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/config.json: {Name:mk956e376678acad9cfb718c2280bfcb4f77fe77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:21:59.869942   37379 start.go:365] acquiring machines lock for kubernetes-upgrade-235196: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:22:51.922437   37379 start.go:369] acquired machines lock for "kubernetes-upgrade-235196" in 52.052447489s
	I0229 18:22:51.922503   37379 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-235196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:22:51.922663   37379 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:22:51.924644   37379 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:22:51.924858   37379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:22:51.924901   37379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:22:51.943019   37379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I0229 18:22:51.943609   37379 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:22:51.944367   37379 main.go:141] libmachine: Using API Version  1
	I0229 18:22:51.944389   37379 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:22:51.944767   37379 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:22:51.945114   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetMachineName
	I0229 18:22:51.945224   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:22:51.945403   37379 start.go:159] libmachine.API.Create for "kubernetes-upgrade-235196" (driver="kvm2")
	I0229 18:22:51.945438   37379 client.go:168] LocalClient.Create starting
	I0229 18:22:51.945535   37379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem
	I0229 18:22:51.945585   37379 main.go:141] libmachine: Decoding PEM data...
	I0229 18:22:51.945608   37379 main.go:141] libmachine: Parsing certificate...
	I0229 18:22:51.945673   37379 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem
	I0229 18:22:51.945715   37379 main.go:141] libmachine: Decoding PEM data...
	I0229 18:22:51.945735   37379 main.go:141] libmachine: Parsing certificate...
	I0229 18:22:51.945764   37379 main.go:141] libmachine: Running pre-create checks...
	I0229 18:22:51.945777   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .PreCreateCheck
	I0229 18:22:51.946185   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetConfigRaw
	I0229 18:22:51.946676   37379 main.go:141] libmachine: Creating machine...
	I0229 18:22:51.946694   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Create
	I0229 18:22:51.946855   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Creating KVM machine...
	I0229 18:22:51.948276   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found existing default KVM network
	I0229 18:22:51.949904   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:51.949721   37854 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:e5:5e} reservation:<nil>}
	I0229 18:22:51.951222   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:51.951120   37854 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:ce:8f} reservation:<nil>}
	I0229 18:22:51.952165   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:51.952074   37854 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:d5:37} reservation:<nil>}
	I0229 18:22:51.953546   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:51.953467   37854 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003c4610}
	I0229 18:22:51.959819   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | trying to create private KVM network mk-kubernetes-upgrade-235196 192.168.72.0/24...
	I0229 18:22:52.054799   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | private KVM network mk-kubernetes-upgrade-235196 192.168.72.0/24 created
	I0229 18:22:52.054855   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:52.054787   37854 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:22:52.054908   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting up store path in /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196 ...
	I0229 18:22:52.054946   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Building disk image from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:22:52.055017   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Downloading /home/jenkins/minikube-integration/18259-6402/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:22:52.300466   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:52.300334   37854 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa...
	I0229 18:22:52.463398   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:52.463232   37854 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/kubernetes-upgrade-235196.rawdisk...
	I0229 18:22:52.463436   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Writing magic tar header
	I0229 18:22:52.463488   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Writing SSH key tar header
	I0229 18:22:52.463529   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:52.463387   37854 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196 ...
	I0229 18:22:52.463572   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196 (perms=drwx------)
	I0229 18:22:52.463602   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:22:52.463626   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196
	I0229 18:22:52.463658   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube (perms=drwxr-xr-x)
	I0229 18:22:52.463671   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines
	I0229 18:22:52.463683   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402 (perms=drwxrwxr-x)
	I0229 18:22:52.463701   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:22:52.463712   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:22:52.463727   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Creating domain...
	I0229 18:22:52.463747   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:22:52.463763   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402
	I0229 18:22:52.463786   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:22:52.463800   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:22:52.463814   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Checking permissions on dir: /home
	I0229 18:22:52.463830   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Skipping /home - not owner
	I0229 18:22:52.464992   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) define libvirt domain using xml: 
	I0229 18:22:52.465011   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) <domain type='kvm'>
	I0229 18:22:52.465021   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <name>kubernetes-upgrade-235196</name>
	I0229 18:22:52.465029   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <memory unit='MiB'>2200</memory>
	I0229 18:22:52.465049   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <vcpu>2</vcpu>
	I0229 18:22:52.465071   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <features>
	I0229 18:22:52.465090   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <acpi/>
	I0229 18:22:52.465106   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <apic/>
	I0229 18:22:52.465118   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <pae/>
	I0229 18:22:52.465127   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     
	I0229 18:22:52.465146   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   </features>
	I0229 18:22:52.465159   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <cpu mode='host-passthrough'>
	I0229 18:22:52.465172   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   
	I0229 18:22:52.465183   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   </cpu>
	I0229 18:22:52.465195   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <os>
	I0229 18:22:52.465204   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <type>hvm</type>
	I0229 18:22:52.465216   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <boot dev='cdrom'/>
	I0229 18:22:52.465228   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <boot dev='hd'/>
	I0229 18:22:52.465240   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <bootmenu enable='no'/>
	I0229 18:22:52.465251   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   </os>
	I0229 18:22:52.465264   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   <devices>
	I0229 18:22:52.465277   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <disk type='file' device='cdrom'>
	I0229 18:22:52.465295   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/boot2docker.iso'/>
	I0229 18:22:52.465308   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <target dev='hdc' bus='scsi'/>
	I0229 18:22:52.465320   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <readonly/>
	I0229 18:22:52.465331   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </disk>
	I0229 18:22:52.465344   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <disk type='file' device='disk'>
	I0229 18:22:52.465354   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:22:52.465372   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/kubernetes-upgrade-235196.rawdisk'/>
	I0229 18:22:52.465387   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <target dev='hda' bus='virtio'/>
	I0229 18:22:52.465399   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </disk>
	I0229 18:22:52.465408   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <interface type='network'>
	I0229 18:22:52.465422   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <source network='mk-kubernetes-upgrade-235196'/>
	I0229 18:22:52.465433   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <model type='virtio'/>
	I0229 18:22:52.465443   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </interface>
	I0229 18:22:52.465454   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <interface type='network'>
	I0229 18:22:52.465468   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <source network='default'/>
	I0229 18:22:52.465479   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <model type='virtio'/>
	I0229 18:22:52.465490   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </interface>
	I0229 18:22:52.465501   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <serial type='pty'>
	I0229 18:22:52.465516   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <target port='0'/>
	I0229 18:22:52.465526   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </serial>
	I0229 18:22:52.465536   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <console type='pty'>
	I0229 18:22:52.465548   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <target type='serial' port='0'/>
	I0229 18:22:52.465560   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </console>
	I0229 18:22:52.465572   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     <rng model='virtio'>
	I0229 18:22:52.465593   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)       <backend model='random'>/dev/random</backend>
	I0229 18:22:52.465603   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     </rng>
	I0229 18:22:52.465615   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     
	I0229 18:22:52.465622   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)     
	I0229 18:22:52.465634   37379 main.go:141] libmachine: (kubernetes-upgrade-235196)   </devices>
	I0229 18:22:52.465645   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) </domain>
	I0229 18:22:52.465658   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) 
	I0229 18:22:52.470266   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:d4:9c:e3 in network default
	I0229 18:22:52.470992   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Ensuring networks are active...
	I0229 18:22:52.471039   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:52.471939   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Ensuring network default is active
	I0229 18:22:52.472477   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Ensuring network mk-kubernetes-upgrade-235196 is active
	I0229 18:22:52.473203   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Getting domain xml...
	I0229 18:22:52.474049   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Creating domain...
	I0229 18:22:53.800813   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Waiting to get IP...
	I0229 18:22:53.801679   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:53.802150   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:53.802185   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:53.802102   37854 retry.go:31] will retry after 272.070756ms: waiting for machine to come up
	I0229 18:22:54.075528   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.076000   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.076031   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:54.075964   37854 retry.go:31] will retry after 280.093127ms: waiting for machine to come up
	I0229 18:22:54.357642   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.358440   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.358464   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:54.358397   37854 retry.go:31] will retry after 335.848647ms: waiting for machine to come up
	I0229 18:22:54.695949   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.696555   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:54.696597   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:54.696472   37854 retry.go:31] will retry after 561.411313ms: waiting for machine to come up
	I0229 18:22:55.260142   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:55.260615   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:55.260641   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:55.260561   37854 retry.go:31] will retry after 758.536513ms: waiting for machine to come up
	I0229 18:22:56.021193   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:56.021942   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:56.021969   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:56.021883   37854 retry.go:31] will retry after 757.042783ms: waiting for machine to come up
	I0229 18:22:56.780902   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:56.781522   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:56.781552   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:56.781475   37854 retry.go:31] will retry after 1.116111126s: waiting for machine to come up
	I0229 18:22:57.899021   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:57.899491   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:57.899511   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:57.899444   37854 retry.go:31] will retry after 923.531219ms: waiting for machine to come up
	I0229 18:22:58.824208   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:22:58.824674   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:22:58.824697   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:22:58.824599   37854 retry.go:31] will retry after 1.472123096s: waiting for machine to come up
	I0229 18:23:00.298667   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:00.299187   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:23:00.299217   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:23:00.299129   37854 retry.go:31] will retry after 1.855705898s: waiting for machine to come up
	I0229 18:23:02.156679   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:02.157176   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:23:02.157200   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:23:02.157107   37854 retry.go:31] will retry after 2.841397554s: waiting for machine to come up
	I0229 18:23:05.000935   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:05.001433   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:23:05.001466   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:23:05.001380   37854 retry.go:31] will retry after 2.936165008s: waiting for machine to come up
	I0229 18:23:07.939125   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:07.939585   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:23:07.939614   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:23:07.939526   37854 retry.go:31] will retry after 2.819431943s: waiting for machine to come up
	I0229 18:23:10.761839   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:10.762366   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find current IP address of domain kubernetes-upgrade-235196 in network mk-kubernetes-upgrade-235196
	I0229 18:23:10.762396   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | I0229 18:23:10.762318   37854 retry.go:31] will retry after 4.600198323s: waiting for machine to come up
	I0229 18:23:15.364858   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.365397   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Found IP for machine: 192.168.72.169
	I0229 18:23:15.365425   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Reserving static IP address...
	I0229 18:23:15.365443   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has current primary IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.365771   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-235196", mac: "52:54:00:85:57:33", ip: "192.168.72.169"} in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.451878   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Getting to WaitForSSH function...
	I0229 18:23:15.451904   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Reserved static IP address: 192.168.72.169
	I0229 18:23:15.451918   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Waiting for SSH to be available...
	I0229 18:23:15.454753   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.455358   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:57:33}
	I0229 18:23:15.455399   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.455695   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Using SSH client type: external
	I0229 18:23:15.455733   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa (-rw-------)
	I0229 18:23:15.455780   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:23:15.455808   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | About to run SSH command:
	I0229 18:23:15.455822   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | exit 0
	I0229 18:23:15.584371   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | SSH cmd err, output: <nil>: 
	I0229 18:23:15.584719   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) KVM machine creation complete!
	I0229 18:23:15.585102   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetConfigRaw
	I0229 18:23:15.585848   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:15.586067   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:15.586260   37379 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:23:15.586275   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetState
	I0229 18:23:15.587842   37379 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:23:15.587860   37379 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:23:15.587868   37379 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:23:15.587878   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:15.590499   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.591000   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:15.591031   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.591169   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:15.591368   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.591583   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.591770   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:15.591981   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:15.592227   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:15.592244   37379 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:23:15.700181   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:23:15.700209   37379 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:23:15.700227   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:15.703203   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.703615   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:15.703663   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.703885   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:15.704127   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.704281   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.704411   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:15.704601   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:15.704808   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:15.704820   37379 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:23:15.816954   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:23:15.817036   37379 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:23:15.817048   37379 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:23:15.817061   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetMachineName
	I0229 18:23:15.817323   37379 buildroot.go:166] provisioning hostname "kubernetes-upgrade-235196"
	I0229 18:23:15.817349   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetMachineName
	I0229 18:23:15.817581   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:15.820638   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.821009   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:15.821043   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.821355   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:15.821554   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.821767   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.821932   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:15.822128   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:15.822304   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:15.822318   37379 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-235196 && echo "kubernetes-upgrade-235196" | sudo tee /etc/hostname
	I0229 18:23:15.948202   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-235196
	
	I0229 18:23:15.948236   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:15.951749   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.952168   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:15.952213   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:15.952409   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:15.952602   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.952781   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:15.952953   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:15.953158   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:15.953346   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:15.953363   37379 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-235196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-235196/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-235196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:23:16.070218   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:23:16.070253   37379 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:23:16.070275   37379 buildroot.go:174] setting up certificates
	I0229 18:23:16.070286   37379 provision.go:83] configureAuth start
	I0229 18:23:16.070299   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetMachineName
	I0229 18:23:16.070593   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetIP
	I0229 18:23:16.074115   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.074547   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.074580   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.074762   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:16.077506   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.077900   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.077929   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.078081   37379 provision.go:138] copyHostCerts
	I0229 18:23:16.078142   37379 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:23:16.078158   37379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:23:16.078209   37379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:23:16.078288   37379 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:23:16.078296   37379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:23:16.078315   37379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:23:16.078367   37379 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:23:16.078374   37379 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:23:16.078391   37379 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:23:16.078430   37379 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-235196 san=[192.168.72.169 192.168.72.169 localhost 127.0.0.1 minikube kubernetes-upgrade-235196]
	I0229 18:23:16.308781   37379 provision.go:172] copyRemoteCerts
	I0229 18:23:16.308848   37379 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:23:16.308875   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:16.311994   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.312434   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.312475   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.312659   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:16.312860   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.313037   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:16.313185   37379 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:23:16.403589   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:23:16.437344   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 18:23:16.473016   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:23:16.507713   37379 provision.go:86] duration metric: configureAuth took 437.411897ms
	I0229 18:23:16.507745   37379 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:23:16.507942   37379 config.go:182] Loaded profile config "kubernetes-upgrade-235196": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:23:16.507974   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:16.508245   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:16.511075   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.511454   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.511495   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.511625   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:16.511879   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.512048   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.512215   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:16.512409   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:16.512621   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:16.512632   37379 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:23:16.618001   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:23:16.618030   37379 buildroot.go:70] root file system type: tmpfs
	I0229 18:23:16.618184   37379 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:23:16.618211   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:16.621385   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.621795   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.621834   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.621956   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:16.622162   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.622356   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.622527   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:16.622740   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:16.622950   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:16.623054   37379 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:23:16.749311   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:23:16.749351   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:16.752444   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.752849   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:16.752880   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:16.753051   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:16.753348   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.753537   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:16.753693   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:16.753910   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:16.754125   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:16.754152   37379 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:23:17.720508   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:23:17.720542   37379 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:23:17.720556   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetURL
	I0229 18:23:17.721815   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Using libvirt version 6000000
	I0229 18:23:17.724012   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.724293   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.724321   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.724502   37379 main.go:141] libmachine: Docker is up and running!
	I0229 18:23:17.724531   37379 main.go:141] libmachine: Reticulating splines...
	I0229 18:23:17.724537   37379 client.go:171] LocalClient.Create took 25.779089202s
	I0229 18:23:17.724563   37379 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-235196" took 25.779161974s
	I0229 18:23:17.724577   37379 start.go:300] post-start starting for "kubernetes-upgrade-235196" (driver="kvm2")
	I0229 18:23:17.724592   37379 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:23:17.724620   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:17.724877   37379 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:23:17.724917   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:17.727301   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.727631   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.727688   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.727811   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:17.728013   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:17.728198   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:17.728361   37379 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:23:17.812767   37379 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:23:17.818045   37379 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:23:17.818077   37379 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:23:17.818144   37379 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:23:17.818241   37379 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:23:17.818333   37379 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:23:17.828443   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:23:17.854273   37379 start.go:303] post-start completed in 129.680619ms
	I0229 18:23:17.854360   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetConfigRaw
	I0229 18:23:17.855072   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetIP
	I0229 18:23:17.857831   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.858240   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.858266   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.858469   37379 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/config.json ...
	I0229 18:23:17.858712   37379 start.go:128] duration metric: createHost completed in 25.936036342s
	I0229 18:23:17.858735   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:17.861138   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.861461   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.861490   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.861603   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:17.861777   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:17.861936   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:17.862068   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:17.862253   37379 main.go:141] libmachine: Using SSH client type: native
	I0229 18:23:17.862401   37379 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.169 22 <nil> <nil>}
	I0229 18:23:17.862411   37379 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:23:17.968647   37379 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709230997.909236723
	
	I0229 18:23:17.968669   37379 fix.go:206] guest clock: 1709230997.909236723
	I0229 18:23:17.968676   37379 fix.go:219] Guest: 2024-02-29 18:23:17.909236723 +0000 UTC Remote: 2024-02-29 18:23:17.858724399 +0000 UTC m=+78.136133731 (delta=50.512324ms)
	I0229 18:23:17.968694   37379 fix.go:190] guest clock delta is within tolerance: 50.512324ms
	I0229 18:23:17.968699   37379 start.go:83] releasing machines lock for "kubernetes-upgrade-235196", held for 26.046230285s
	I0229 18:23:17.968726   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:17.969013   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetIP
	I0229 18:23:17.971596   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.971940   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.971980   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.972102   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:17.972682   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:17.972899   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:23:17.972989   37379 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:23:17.973019   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:17.973155   37379 ssh_runner.go:195] Run: cat /version.json
	I0229 18:23:17.973193   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:23:17.975969   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.976284   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.976362   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.976385   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.976521   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:17.976687   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:17.976778   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:17.976806   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:17.976847   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:17.976944   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:23:17.977029   37379 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:23:17.977103   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:23:17.977248   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:23:17.977388   37379 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:23:18.087588   37379 ssh_runner.go:195] Run: systemctl --version
	I0229 18:23:18.094034   37379 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:23:18.100323   37379 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:23:18.100377   37379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:23:18.114014   37379 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:23:18.139299   37379 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:23:18.139322   37379 start.go:475] detecting cgroup driver to use...
	I0229 18:23:18.139459   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:23:18.172946   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:23:18.185735   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:23:18.197780   37379 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:23:18.197835   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:23:18.210828   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:23:18.224547   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:23:18.237911   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:23:18.252103   37379 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:23:18.268436   37379 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:23:18.283677   37379 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:23:18.298210   37379 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:23:18.312127   37379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:23:18.453632   37379 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:23:18.479852   37379 start.go:475] detecting cgroup driver to use...
	I0229 18:23:18.479995   37379 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:23:18.501077   37379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:23:18.520812   37379 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:23:18.546455   37379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:23:18.562325   37379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:23:18.579992   37379 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:23:18.616728   37379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:23:18.633112   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:23:18.655482   37379 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:23:18.660308   37379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:23:18.671000   37379 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:23:18.690108   37379 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:23:18.816903   37379 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:23:18.985353   37379 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:23:18.985509   37379 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:23:19.007528   37379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:23:19.143936   37379 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:23:20.615693   37379 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.471720188s)
	I0229 18:23:20.615766   37379 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:23:20.649170   37379 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:23:20.680325   37379 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 18:23:20.680375   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetIP
	I0229 18:23:20.683677   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:20.684257   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:23:08 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:23:20.684299   37379 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:23:20.684555   37379 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:23:20.693059   37379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:23:20.708806   37379 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:23:20.708876   37379 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:23:20.733882   37379 docker.go:685] Got preloaded images: 
	I0229 18:23:20.733907   37379 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:23:20.733950   37379 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:23:20.750632   37379 ssh_runner.go:195] Run: which lz4
	I0229 18:23:20.755545   37379 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:23:20.762109   37379 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:23:20.762143   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 18:23:22.509663   37379 docker.go:649] Took 1.754161 seconds to copy over tarball
	I0229 18:23:22.509737   37379 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:23:24.868106   37379 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.358344947s)
	I0229 18:23:24.868134   37379 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:23:24.915070   37379 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:23:24.929632   37379 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 18:23:24.953127   37379 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:23:25.086839   37379 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:23:27.549695   37379 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.462818242s)
	I0229 18:23:27.549816   37379 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:23:27.572950   37379 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:23:27.572974   37379 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:23:27.572986   37379 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:23:27.575536   37379 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:23:27.575536   37379 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:23:27.575542   37379 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:23:27.575569   37379 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:23:27.575592   37379 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:23:27.575539   37379 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:23:27.575541   37379 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:23:27.576055   37379 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:23:27.576822   37379 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:23:27.577111   37379 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:23:27.577111   37379 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:23:27.577118   37379 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:23:27.577134   37379 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:23:27.577126   37379 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:23:27.577170   37379 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:23:27.577192   37379 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:23:27.724413   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:23:27.725229   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:23:27.734052   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:23:27.736485   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:23:27.738451   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:23:27.744081   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:23:27.748331   37379 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:23:27.748377   37379 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:23:27.748413   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:23:27.769430   37379 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:23:27.769485   37379 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:23:27.769532   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:23:27.787989   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:23:27.796524   37379 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:23:27.796573   37379 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:23:27.796617   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 18:23:27.823736   37379 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:23:27.823787   37379 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 18:23:27.823834   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 18:23:27.823934   37379 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:23:27.823962   37379 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:23:27.823989   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:23:27.824084   37379 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:23:27.824107   37379 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:23:27.824134   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:23:27.835622   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:23:27.853373   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:23:27.858587   37379 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:23:27.858634   37379 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:23:27.858677   37379 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:23:27.862271   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:23:27.905018   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:23:27.905113   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:23:27.911996   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:23:27.912037   37379 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:23:28.212578   37379 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:23:28.233021   37379 cache_images.go:92] LoadImages completed in 660.017891ms
	W0229 18:23:28.233090   37379 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
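	A note on the two "Unable to load cached images" lines above: the pre-seeded image tarballs were missing from the host-side cache, so the images are left to be pulled during kubeadm's preflight later in this log. A quick manual confirmation on the Jenkins host, using the cache path taken from the log (an illustrative check only, not something the test itself runs), would be:

	    # List the tarballs that actually exist for this arch/registry; the stat above
	    # failed on kube-apiserver_v1.16.0, so it should be absent from this listing.
	    ls -l /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/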
	I0229 18:23:28.233142   37379 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
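	The "docker info --format {{.CgroupDriver}}" probe above feeds the cgroupDriver value (cgroupfs) that appears in the kubelet configuration rendered below, and it is the same mismatch that kubeadm's IsDockerSystemdCheck warning later complains about. The same check can be run by hand on the node (standard Docker CLI, shown only as an illustration):

	    docker info --format '{{.CgroupDriver}}'   # prints cgroupfs or systemd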
	I0229 18:23:28.263039   37379 cni.go:84] Creating CNI manager for ""
	I0229 18:23:28.263072   37379 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:23:28.263091   37379 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:23:28.263115   37379 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.169 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-235196 NodeName:kubernetes-upgrade-235196 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:23:28.263287   37379 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-235196"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-235196
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.169:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:23:28.263374   37379 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-235196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
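	The [Unit]/[Service] fragment above is the systemd drop-in that minikube copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On a live node, the effective unit (base kubelet.service plus this drop-in) and the service state could be inspected with the usual systemd commands; this is a troubleshooting sketch, not output captured by the test:

	    sudo systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in
	    sudo systemctl daemon-reload    # pick up the drop-in after it is copied
	    sudo systemctl status kubelet   # confirm the service actually started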
	I0229 18:23:28.263436   37379 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:23:28.274034   37379 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:23:28.274111   37379 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:23:28.287155   37379 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0229 18:23:28.306903   37379 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:23:28.325049   37379 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2188 bytes)
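	The 2188-byte payload written to /var/tmp/minikube/kubeadm.yaml.new above is the kubeadm config rendered earlier in the log; it is promoted to /var/tmp/minikube/kubeadm.yaml before init runs. Two standard kubeadm invocations that could sanity-check such a file by hand (not part of the test flow, shown purely as a hedged illustration) are the image pre-pull that the preflight output later recommends, and a dry run:

	    sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml   # fetch the v1.16.0 control-plane images up front
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run       # show what init would do without changing the node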
	I0229 18:23:28.343410   37379 ssh_runner.go:195] Run: grep 192.168.72.169	control-plane.minikube.internal$ /etc/hosts
	I0229 18:23:28.347866   37379 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
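	The bash one-liner above is the idiom minikube uses to edit /etc/hosts safely: it drops any existing control-plane.minikube.internal entry with grep -v, appends the fresh "192.168.72.169 control-plane.minikube.internal" mapping, writes the result to a temp file, and only then copies it back over /etc/hosts with sudo. Verifying the result by hand would simply be:

	    grep control-plane.minikube.internal /etc/hosts   # should show the 192.168.72.169 mapping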
	I0229 18:23:28.361904   37379 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196 for IP: 192.168.72.169
	I0229 18:23:28.361943   37379 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.362083   37379 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:23:28.362119   37379 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:23:28.362163   37379 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key
	I0229 18:23:28.362175   37379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt with IP's: []
	I0229 18:23:28.473252   37379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt ...
	I0229 18:23:28.473281   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt: {Name:mkc41ca25cc012c65dffa57ccd67de30eb946afe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.473444   37379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key ...
	I0229 18:23:28.473456   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key: {Name:mk130f357df9b039cd5b0aa08734232005f48d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.473530   37379 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key.8ded931d
	I0229 18:23:28.473545   37379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt.8ded931d with IP's: [192.168.72.169 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:23:28.603038   37379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt.8ded931d ...
	I0229 18:23:28.603073   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt.8ded931d: {Name:mk3aab8eae3eb1055be431bcdaf0690ca4a9f3ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.603238   37379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key.8ded931d ...
	I0229 18:23:28.603252   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key.8ded931d: {Name:mk946786c33e94f00ebe34db8ce00c1fa0420b86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.603328   37379 certs.go:337] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt.8ded931d -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt
	I0229 18:23:28.603421   37379 certs.go:341] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key.8ded931d -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key
	I0229 18:23:28.603495   37379 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key
	I0229 18:23:28.603517   37379 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.crt with IP's: []
	I0229 18:23:28.708750   37379 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.crt ...
	I0229 18:23:28.708783   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.crt: {Name:mk13b91c02bf164d4e8685c92d74af629587586f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.708940   37379 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key ...
	I0229 18:23:28.708953   37379 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key: {Name:mk03af44d5059f80b673e9ded9b17fb5cb350a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:23:28.709104   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:23:28.709139   37379 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:23:28.709147   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:23:28.709173   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:23:28.709192   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:23:28.709214   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:23:28.709249   37379 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:23:28.709850   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:23:28.737976   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:23:28.769207   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:23:28.801476   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:23:28.837413   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:23:28.872241   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:23:28.900786   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:23:28.928639   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:23:28.959791   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:23:28.999501   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:23:29.035204   37379 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:23:29.071005   37379 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:23:29.092227   37379 ssh_runner.go:195] Run: openssl version
	I0229 18:23:29.100279   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:23:29.114127   37379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:23:29.120204   37379 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:23:29.120276   37379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:23:29.127353   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:23:29.140326   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:23:29.153461   37379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:23:29.158710   37379 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:23:29.158763   37379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:23:29.165468   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:23:29.176857   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:23:29.189629   37379 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:23:29.195009   37379 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:23:29.195128   37379 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:23:29.201352   37379 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
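	The openssl/ln sequence in the preceding lines is how the minikube CA material is installed system-wide: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a symlink named <hash>.0 is created in /etc/ssl/certs, the subject-hash name OpenSSL uses to look up trusted CAs. For example, the b5213941.0 link for minikubeCA.pem corresponds to:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 in this run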
	I0229 18:23:29.216156   37379 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:23:29.221280   37379 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:23:29.221351   37379 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-235196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.169 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:23:29.221455   37379 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:23:29.238903   37379 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:23:29.250016   37379 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:23:29.261086   37379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:23:29.272182   37379 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:23:29.272229   37379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:23:29.396766   37379 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:23:29.397033   37379 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:23:29.697930   37379 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:23:29.698068   37379 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:23:29.698163   37379 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:23:29.869975   37379 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:23:29.871529   37379 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:23:29.880805   37379 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:23:30.007816   37379 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:23:30.009840   37379 out.go:204]   - Generating certificates and keys ...
	I0229 18:23:30.009969   37379 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:23:30.010079   37379 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:23:30.114537   37379 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:23:30.327465   37379 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:23:30.470196   37379 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:23:30.533179   37379 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:23:30.676557   37379 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:23:30.676846   37379 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-235196 localhost] and IPs [192.168.72.169 127.0.0.1 ::1]
	I0229 18:23:30.744498   37379 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:23:30.744922   37379 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-235196 localhost] and IPs [192.168.72.169 127.0.0.1 ::1]
	I0229 18:23:30.804614   37379 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:23:31.010234   37379 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:23:31.267791   37379 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:23:31.267912   37379 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:23:31.518391   37379 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:23:31.616404   37379 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:23:31.741439   37379 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:23:31.953863   37379 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:23:31.954802   37379 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:23:31.956566   37379 out.go:204]   - Booting up control plane ...
	I0229 18:23:31.956666   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:23:31.962797   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:23:31.964785   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:23:31.966200   37379 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:23:31.968823   37379 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:24:11.915228   37379 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:24:11.916729   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:11.917142   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:16.916497   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:16.916772   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:26.915479   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:26.915724   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:46.914875   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:46.915116   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:25:26.915919   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:25:26.916160   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:25:26.916174   37379 kubeadm.go:322] 
	I0229 18:25:26.916245   37379 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:25:26.916310   37379 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:25:26.916325   37379 kubeadm.go:322] 
	I0229 18:25:26.916369   37379 kubeadm.go:322] This error is likely caused by:
	I0229 18:25:26.916408   37379 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:25:26.916529   37379 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:25:26.916541   37379 kubeadm.go:322] 
	I0229 18:25:26.916655   37379 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:25:26.916701   37379 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:25:26.916740   37379 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:25:26.916749   37379 kubeadm.go:322] 
	I0229 18:25:26.916882   37379 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:25:26.916990   37379 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:25:26.917101   37379 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:25:26.917173   37379 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:25:26.917298   37379 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:25:26.917356   37379 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:25:26.918689   37379 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:25:26.918876   37379 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:25:26.919024   37379 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:25:26.919116   37379 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:25:26.919175   37379 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:25:26.919296   37379 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-235196 localhost] and IPs [192.168.72.169 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-235196 localhost] and IPs [192.168.72.169 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
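	At this point the first kubeadm init attempt has failed because the kubelet never answered its health check on 127.0.0.1:10248 within the four-minute wait. The error text above already names the useful follow-ups; collected in one place (a troubleshooting sketch for a node in this state, not commands executed by the test at this step), they are:

	    sudo systemctl status kubelet                   # is the service running at all?
	    sudo journalctl -xeu kubelet                    # why it exited, if it did
	    sudo docker ps -a | grep kube | grep -v pause   # did any control-plane container start?
	    sudo docker logs CONTAINERID                    # logs of a failing container found above
	    sudo systemctl enable kubelet.service           # addresses the Service-Kubelet preflight warning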
	
	I0229 18:25:26.919337   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:25:27.374605   37379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:25:27.389781   37379 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:25:27.404017   37379 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:25:27.404057   37379 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:25:27.471782   37379 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:25:27.471950   37379 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:25:27.680434   37379 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:25:27.680715   37379 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:25:27.680851   37379 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:25:27.849932   37379 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:25:27.850109   37379 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:25:27.858570   37379 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:25:27.987169   37379 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:25:27.989279   37379 out.go:204]   - Generating certificates and keys ...
	I0229 18:25:27.989377   37379 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:25:27.989484   37379 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:25:27.989593   37379 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:25:27.989680   37379 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:25:27.989802   37379 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:25:27.989885   37379 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:25:27.989961   37379 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:25:27.990031   37379 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:25:27.990140   37379 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:25:27.990256   37379 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:25:27.990310   37379 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:25:27.990386   37379 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:25:28.265522   37379 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:25:28.462813   37379 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:25:29.052864   37379 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:25:29.160294   37379 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:25:29.161229   37379 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:25:29.163077   37379 out.go:204]   - Booting up control plane ...
	I0229 18:25:29.163199   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:25:29.167450   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:25:29.168912   37379 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:25:29.169971   37379 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:25:29.173943   37379 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:26:09.175579   37379 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:26:09.176507   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:26:09.176777   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:26:14.177214   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:26:14.177488   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:26:24.177942   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:26:24.178167   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:26:44.178986   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:26:44.179244   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:27:24.181325   37379 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:27:24.181581   37379 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:27:24.181596   37379 kubeadm.go:322] 
	I0229 18:27:24.181660   37379 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:27:24.181823   37379 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:27:24.181844   37379 kubeadm.go:322] 
	I0229 18:27:24.181888   37379 kubeadm.go:322] This error is likely caused by:
	I0229 18:27:24.181931   37379 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:27:24.182088   37379 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:27:24.182102   37379 kubeadm.go:322] 
	I0229 18:27:24.182233   37379 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:27:24.182297   37379 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:27:24.182346   37379 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:27:24.182357   37379 kubeadm.go:322] 
	I0229 18:27:24.182478   37379 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:27:24.182593   37379 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:27:24.182727   37379 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:27:24.182806   37379 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:27:24.182906   37379 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:27:24.182975   37379 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:27:24.184612   37379 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:27:24.184760   37379 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:27:24.184878   37379 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:27:24.184992   37379 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:27:24.185078   37379 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:27:24.185176   37379 kubeadm.go:406] StartCluster complete in 3m54.963835709s
	I0229 18:27:24.185288   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:27:24.214539   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.214569   37379 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:27:24.214630   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:27:24.236706   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.236741   37379 logs.go:278] No container was found matching "etcd"
	I0229 18:27:24.236809   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:27:24.257343   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.257367   37379 logs.go:278] No container was found matching "coredns"
	I0229 18:27:24.257443   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:27:24.276034   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.276064   37379 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:27:24.276123   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:27:24.295268   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.295295   37379 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:27:24.295358   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:27:24.313908   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.313934   37379 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:27:24.313991   37379 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:27:24.333040   37379 logs.go:276] 0 containers: []
	W0229 18:27:24.333078   37379 logs.go:278] No container was found matching "kindnet"
	I0229 18:27:24.333094   37379 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:27:24.333115   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:27:24.423969   37379 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:27:24.423991   37379 logs.go:123] Gathering logs for Docker ...
	I0229 18:27:24.424007   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:27:24.475801   37379 logs.go:123] Gathering logs for container status ...
	I0229 18:27:24.475835   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:27:24.568070   37379 logs.go:123] Gathering logs for kubelet ...
	I0229 18:27:24.568110   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:27:24.626413   37379 logs.go:123] Gathering logs for dmesg ...
	I0229 18:27:24.626455   37379 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0229 18:27:24.641651   37379 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:27:24.641706   37379 out.go:239] * 
	W0229 18:27:24.641785   37379 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:27:24.641818   37379 out.go:239] * 
	W0229 18:27:24.642723   37379 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:27:24.770267   37379 out.go:177] 
	W0229 18:27:24.935092   37379 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:27:24.935304   37379 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:27:25.122208   37379 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:27:25.256138   37379 out.go:177] 

                                                
                                                
** /stderr **
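The start above fails because the kubelet never comes up; kubeadm's own output points at the cgroupfs/systemd cgroup-driver mismatch, the disabled kubelet unit, and the suggestion to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal remediation sketch along those lines (the daemon.json edit is the generic fix from the kubernetes.io guide linked in the warning and assumes no existing /etc/docker/daemon.json in the guest; nothing here was verified by this run):

	# inside the guest: minikube ssh -p kubernetes-upgrade-235196
	systemctl status kubelet                  # is the kubelet unit running at all?
	journalctl -xeu kubelet | tail -n 50      # look for the cgroup-driver complaint
	sudo systemctl enable kubelet.service
	# align Docker's cgroup driver with the kubelet (merge instead if a daemon.json already exists)
	echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker
	# from the host: retry the same start with the override the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd
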
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-235196
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-235196: (2.644584412s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-235196 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-235196 status --format={{.Host}}: exit status 7 (81.67121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
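Exit status 7 is consistent with the stop that just completed: minikube status encodes host, cluster and Kubernetes state as separate bits of the exit code (1, 2 and 4, per the command's help text), so 7 means all three are down rather than an unexpected failure. A short decoding sketch, assuming that bit layout:

	out/minikube-linux-amd64 -p kubernetes-upgrade-235196 status --format={{.Host}}
	rc=$?
	# assumed bit meanings: 1 = host not running, 2 = cluster not running, 4 = kubernetes not running
	(( rc & 1 )) && echo "host down"
	(( rc & 2 )) && echo "cluster down"
	(( rc & 4 )) && echo "kubernetes down"
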
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (54.042160462s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-235196 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (121.208673ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-235196] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-235196
	    minikube start -p kubernetes-upgrade-235196 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2351962 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-235196 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
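The downgrade is refused by design: the profile already records a v1.29.0-rc.2 cluster, and minikube will not move an existing cluster to an older Kubernetes version, hence exit status 106. A short sketch of the check-then-recreate flow the suggestion lists (jq is only assumed to be available for extracting the server version; the delete/start commands are the ones printed above, invoked through the test binary used in this report):

	# confirm what the existing cluster is actually running
	kubectl --context kubernetes-upgrade-235196 version --output=json | jq -r .serverVersion.gitVersion
	# dropping back to v1.16.0 means recreating the profile rather than restarting it
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-235196
	out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --kubernetes-version=v1.16.0 --driver=kvm2
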
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235196 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2 : (30.256429736s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 18:28:52.688691157 +0000 UTC m=+3103.936672693
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-235196 -n kubernetes-upgrade-235196
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-235196 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-235196 logs -n 25: (1.583788375s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                      Args                      |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-960195 sudo                    | NoKubernetes-960195       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC |                     |
	|         | systemctl is-active --quiet                    |                           |         |         |                     |                     |
	|         | service kubelet                                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-960195                         | NoKubernetes-960195       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:25 UTC |
	| ssh     | docker-flags-296466 ssh                        | docker-flags-296466       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:25 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=Environment                         |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-338754                      | minikube                  | jenkins | v1.26.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:26 UTC |
	|         | --memory=2200 --vm-driver=kvm2                 |                           |         |         |                     |                     |
	|         |                                                |                           |         |         |                     |                     |
	| ssh     | docker-flags-296466 ssh                        | docker-flags-296466       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:25 UTC |
	|         | sudo systemctl show docker                     |                           |         |         |                     |                     |
	|         | --property=ExecStart                           |                           |         |         |                     |                     |
	|         | --no-pager                                     |                           |         |         |                     |                     |
	| delete  | -p docker-flags-296466                         | docker-flags-296466       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:25 UTC |
	| start   | -p cert-options-556865                         | cert-options-556865       | jenkins | v1.32.0 | 29 Feb 24 18:25 UTC | 29 Feb 24 18:26 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                  |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                    |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com               |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-338754 stop                    | minikube                  | jenkins | v1.26.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:26 UTC |
	| start   | -p stopped-upgrade-338754                      | stopped-upgrade-338754    | jenkins | v1.32.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:27 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| ssh     | cert-options-556865 ssh                        | cert-options-556865       | jenkins | v1.32.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:26 UTC |
	|         | openssl x509 -text -noout -in                  |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt          |                           |         |         |                     |                     |
	| ssh     | -p cert-options-556865 -- sudo                 | cert-options-556865       | jenkins | v1.32.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:26 UTC |
	|         | cat /etc/kubernetes/admin.conf                 |                           |         |         |                     |                     |
	| delete  | -p cert-options-556865                         | cert-options-556865       | jenkins | v1.32.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:26 UTC |
	| start   | -p gvisor-859306 --memory=2200                 | gvisor-859306             | jenkins | v1.32.0 | 29 Feb 24 18:26 UTC | 29 Feb 24 18:27 UTC |
	|         | --container-runtime=containerd --docker-opt    |                           |         |         |                     |                     |
	|         | containerd=/var/run/containerd/containerd.sock |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-235196                   | kubernetes-upgrade-235196 | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC | 29 Feb 24 18:27 UTC |
	| start   | -p kubernetes-upgrade-235196                   | kubernetes-upgrade-235196 | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC | 29 Feb 24 18:28 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2              |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| cache   | gvisor-859306 cache add                        | gvisor-859306             | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC | 29 Feb 24 18:28 UTC |
	|         | gcr.io/k8s-minikube/gvisor-addon:2             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-325534                      | cert-expiration-325534    | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC | 29 Feb 24 18:28 UTC |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                        |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-338754                      | stopped-upgrade-338754    | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC | 29 Feb 24 18:27 UTC |
	| start   | -p force-systemd-env-887530                    | force-systemd-env-887530  | jenkins | v1.32.0 | 29 Feb 24 18:27 UTC |                     |
	|         | --memory=2048                                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| addons  | gvisor-859306 addons enable                    | gvisor-859306             | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC | 29 Feb 24 18:28 UTC |
	|         | gvisor                                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-235196                   | kubernetes-upgrade-235196 | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC |                     |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-235196                   | kubernetes-upgrade-235196 | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC | 29 Feb 24 18:28 UTC |
	|         | --memory=2200                                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2              |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	| stop    | -p gvisor-859306                               | gvisor-859306             | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC |                     |
	| delete  | -p cert-expiration-325534                      | cert-expiration-325534    | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC | 29 Feb 24 18:28 UTC |
	| start   | -p auto-911469 --memory=3072                   | auto-911469               | jenkins | v1.32.0 | 29 Feb 24 18:28 UTC |                     |
	|         | --alsologtostderr --wait=true                  |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                             |                           |         |         |                     |                     |
	|         | --driver=kvm2                                  |                           |         |         |                     |                     |
	|---------|------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:28:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:28:33.456666   42439 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:28:33.456856   42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:28:33.456870   42439 out.go:304] Setting ErrFile to fd 2...
	I0229 18:28:33.456877   42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:28:33.457186   42439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:28:33.458135   42439 out.go:298] Setting JSON to false
	I0229 18:28:33.459596   42439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4264,"bootTime":1709227050,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:28:33.459702   42439 start.go:139] virtualization: kvm guest
	I0229 18:28:33.462146   42439 out.go:177] * [auto-911469] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:28:33.463734   42439 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:28:33.463754   42439 notify.go:220] Checking for updates...
	I0229 18:28:33.465311   42439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:28:33.467029   42439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:28:33.468618   42439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:28:33.470230   42439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:28:33.471789   42439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:28:33.473644   42439 config.go:182] Loaded profile config "force-systemd-env-887530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:28:33.473757   42439 config.go:182] Loaded profile config "gvisor-859306": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0229 18:28:33.473866   42439 config.go:182] Loaded profile config "kubernetes-upgrade-235196": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:28:33.473984   42439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:28:33.516392   42439 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:28:33.517702   42439 start.go:299] selected driver: kvm2
	I0229 18:28:33.517727   42439 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:28:33.517742   42439 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:28:33.518503   42439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:28:33.518639   42439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:28:33.535112   42439 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:28:33.535179   42439 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:28:33.535506   42439 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:28:33.535598   42439 cni.go:84] Creating CNI manager for ""
	I0229 18:28:33.535627   42439 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:28:33.535657   42439 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 18:28:33.535674   42439 start_flags.go:323] config:
	{Name:auto-911469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-911469 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:28:33.535882   42439 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:28:33.537859   42439 out.go:177] * Starting control plane node auto-911469 in cluster auto-911469
	I0229 18:28:33.539305   42439 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 18:28:33.539350   42439 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 18:28:33.539362   42439 cache.go:56] Caching tarball of preloaded images
	I0229 18:28:33.539469   42439 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:28:33.539484   42439 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 18:28:33.539596   42439 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/config.json ...
	I0229 18:28:33.539619   42439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/config.json: {Name:mk4e33a157a021547c2491aa2b7a1ab107a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:33.539821   42439 start.go:365] acquiring machines lock for auto-911469: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:28:33.539864   42439 start.go:369] acquired machines lock for "auto-911469" in 22.779µs
	I0229 18:28:33.539888   42439 start.go:93] Provisioning new machine with config: &{Name:auto-911469 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-911469 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:28:33.540003   42439 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:28:32.666961   41783 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.519754297s)
	I0229 18:28:32.667062   41783 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:28:32.693394   41783 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:28:32.693423   41783 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:28:32.693474   41783 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
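The docker info --format {{.CgroupDriver}} call above is how the run discovers which cgroup driver the guest's Docker daemon uses (systemd here, because the force-systemd-env test sets it). A minimal Go sketch of the same probe, illustrative only and not minikube's actual helper:

// cgroupdriver.go: standalone sketch of the "docker info --format {{.CgroupDriver}}" probe.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver runs `docker info --format {{.CgroupDriver}}` and returns the
// trimmed result, typically "systemd" or "cgroupfs".
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker info: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("cgroup driver:", driver)
}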
	I0229 18:28:32.725665   41783 cni.go:84] Creating CNI manager for ""
	I0229 18:28:32.725697   41783 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:28:32.725718   41783 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:28:32.725738   41783 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.4 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-887530 NodeName:force-systemd-env-887530 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.4 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:28:32.725917   41783 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-887530"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
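	The kubeadm.yaml above is rendered from Go templates on the host before being copied to the VM as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of that kind of rendering with text/template follows; it is not minikube's real template, and the struct field names here are invented for illustration.

// kubeadmtmpl.go: illustrative rendering of a ClusterConfiguration fragment like the one logged above.
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

// params carries the values substituted into the template (hypothetical field names).
type params struct {
	APIServerPort     int
	KubernetesVersion string
	DNSDomain         string
	PodSubnet         string
	ServiceCIDR       string
}

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	// Values taken from the force-systemd-env-887530 run above.
	p := params{
		APIServerPort:     8443,
		KubernetesVersion: "v1.28.4",
		DNSDomain:         "cluster.local",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	_ = t.Execute(os.Stdout, p)
}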
	
	I0229 18:28:32.726007   41783 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=force-systemd-env-887530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-887530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:28:32.726069   41783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:28:32.739020   41783 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:28:32.739139   41783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:28:32.752166   41783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I0229 18:28:32.771260   41783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:28:32.792835   41783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0229 18:28:32.843747   41783 ssh_runner.go:195] Run: grep 192.168.50.4	control-plane.minikube.internal$ /etc/hosts
	I0229 18:28:32.848296   41783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:28:32.862816   41783 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530 for IP: 192.168.50.4
	I0229 18:28:32.862852   41783 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:32.862998   41783 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:28:32.863038   41783 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:28:32.863083   41783 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key
	I0229 18:28:32.863098   41783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt with IP's: []
	I0229 18:28:32.976582   41783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt ...
	I0229 18:28:32.976610   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt: {Name:mk373945ba14f0c9fc55243edf98ce70e8be787b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:32.976779   41783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key ...
	I0229 18:28:32.976792   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key: {Name:mkb09509bbdf29ea92548b7b74134fd259461cfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:32.976869   41783 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key.982e8d65
	I0229 18:28:32.976884   41783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt.982e8d65 with IP's: [192.168.50.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:28:33.088087   41783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt.982e8d65 ...
	I0229 18:28:33.088112   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt.982e8d65: {Name:mkc87cb6c3087e41c02254929bdbb4dfe0d677e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:33.088293   41783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key.982e8d65 ...
	I0229 18:28:33.088312   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key.982e8d65: {Name:mkf1550743a8f741c6dba98eaaa25341069dd86f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:33.088407   41783 certs.go:337] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt.982e8d65 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt
	I0229 18:28:33.088502   41783 certs.go:341] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key.982e8d65 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key
	I0229 18:28:33.088558   41783 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.key
	I0229 18:28:33.088573   41783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.crt with IP's: []
	I0229 18:28:33.176751   41783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.crt ...
	I0229 18:28:33.176784   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.crt: {Name:mke85eceffcd62326af3f790ae9948361082defd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:33.176953   41783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.key ...
	I0229 18:28:33.176970   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.key: {Name:mk6cfebbb9028ebcbbb7d4b7bd7552f642b80613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
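	The crypto.go steps above mint fresh client, apiserver, and aggregator certificates signed by the shared minikubeCA. A self-contained sketch of the same idea using Go's crypto/x509 is shown below; the throwaway CA is created inline purely for illustration, whereas the run above loads ca.crt/ca.key from the .minikube directory.

// clientcert.go: sketch of generating a CA-signed client certificate (illustrative, not minikube code).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Throwaway CA; errors are ignored to keep the sketch short.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client ("minikube-user") certificate signed by the CA.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)

	// Emit the signed certificate in PEM form.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}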
	I0229 18:28:33.177065   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:28:33.177089   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:28:33.177099   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:28:33.177113   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:28:33.177125   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:28:33.177142   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:28:33.177157   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:28:33.177173   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:28:33.177236   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:28:33.177289   41783 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:28:33.177303   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:28:33.177338   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:28:33.177369   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:28:33.177399   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:28:33.177472   41783 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:28:33.177512   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:33.177532   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem -> /usr/share/ca-certificates/13605.pem
	I0229 18:28:33.177550   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> /usr/share/ca-certificates/136052.pem
	I0229 18:28:33.178417   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:28:33.205864   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:28:33.232404   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:28:33.257142   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:28:33.286450   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:28:33.314850   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:28:33.343305   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:28:33.369622   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:28:33.401291   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:28:33.429973   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:28:33.461972   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:28:33.491324   41783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:28:33.512618   41783 ssh_runner.go:195] Run: openssl version
	I0229 18:28:33.518812   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:28:33.532320   41783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:33.537903   41783 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:33.537968   41783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:33.545395   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:28:33.557444   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:28:33.570422   41783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:28:33.577239   41783 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:28:33.577315   41783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:28:33.587162   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:28:33.602537   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:28:33.617316   41783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:28:33.623160   41783 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:28:33.623215   41783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:28:33.629430   41783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
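	Each CA copied under /usr/share/ca-certificates is then exposed to OpenSSL-based clients through a subject-hash symlink in /etc/ssl/certs, which is what the "openssl x509 -hash -noout" and "ln -fs" commands above do. A rough Go equivalent, for illustration only (the run above executes the shell commands over SSH inside the guest):

// catrust.go: sketch of creating an OpenSSL subject-hash symlink for a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the subject hash of a PEM certificate and links <hash>.0 to it,
// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log.
func installCA(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0") // e.g. b5213941.0
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}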
	I0229 18:28:33.642398   41783 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:28:33.646975   41783 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:28:33.647036   41783 kubeadm.go:404] StartCluster: {Name:force-systemd-env-887530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-887530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:28:33.647195   41783 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:28:33.674320   41783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:28:33.689806   41783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:28:33.700887   41783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:28:33.711764   41783 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:28:33.711830   41783 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:28:33.832413   41783 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:28:33.832549   41783 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:28:34.112849   41783 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:28:34.112986   41783 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:28:34.113106   41783 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:28:34.515891   41783 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:28:34.518359   41783 out.go:204]   - Generating certificates and keys ...
	I0229 18:28:34.522223   41783 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:28:34.522304   41783 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:28:34.863718   41783 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:28:35.029903   41783 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:28:35.257078   41783 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:28:35.324520   41783 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:28:35.467579   41783 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:28:35.467812   41783 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-887530 localhost] and IPs [192.168.50.4 127.0.0.1 ::1]
	I0229 18:28:35.959606   41783 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:28:35.960188   41783 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-887530 localhost] and IPs [192.168.50.4 127.0.0.1 ::1]
	I0229 18:28:36.555136   41783 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:28:36.643171   41783 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:28:36.689236   41783 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:28:36.689540   41783 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:28:36.946687   41783 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:28:37.091127   41783 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:28:37.208963   41783 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:28:37.699779   41783 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:28:37.701055   41783 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:28:37.705380   41783 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:28:33.541873   42439 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 18:28:33.542027   42439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:33.542074   42439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:33.557327   42439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0229 18:28:33.557825   42439 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:33.558416   42439 main.go:141] libmachine: Using API Version  1
	I0229 18:28:33.558445   42439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:33.558825   42439 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:33.559024   42439 main.go:141] libmachine: (auto-911469) Calling .GetMachineName
	I0229 18:28:33.559198   42439 main.go:141] libmachine: (auto-911469) Calling .DriverName
	I0229 18:28:33.559373   42439 start.go:159] libmachine.API.Create for "auto-911469" (driver="kvm2")
	I0229 18:28:33.559421   42439 client.go:168] LocalClient.Create starting
	I0229 18:28:33.559463   42439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem
	I0229 18:28:33.559508   42439 main.go:141] libmachine: Decoding PEM data...
	I0229 18:28:33.559528   42439 main.go:141] libmachine: Parsing certificate...
	I0229 18:28:33.559593   42439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem
	I0229 18:28:33.559619   42439 main.go:141] libmachine: Decoding PEM data...
	I0229 18:28:33.559634   42439 main.go:141] libmachine: Parsing certificate...
	I0229 18:28:33.559678   42439 main.go:141] libmachine: Running pre-create checks...
	I0229 18:28:33.559691   42439 main.go:141] libmachine: (auto-911469) Calling .PreCreateCheck
	I0229 18:28:33.560117   42439 main.go:141] libmachine: (auto-911469) Calling .GetConfigRaw
	I0229 18:28:33.560581   42439 main.go:141] libmachine: Creating machine...
	I0229 18:28:33.560600   42439 main.go:141] libmachine: (auto-911469) Calling .Create
	I0229 18:28:33.560741   42439 main.go:141] libmachine: (auto-911469) Creating KVM machine...
	I0229 18:28:33.562243   42439 main.go:141] libmachine: (auto-911469) DBG | found existing default KVM network
	I0229 18:28:33.563896   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:33.563735   42461 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:68:d8:45} reservation:<nil>}
	I0229 18:28:33.565272   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:33.565133   42461 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b5:75:c7} reservation:<nil>}
	I0229 18:28:33.566323   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:33.566245   42461 network.go:207] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002852a0}
	I0229 18:28:33.571931   42439 main.go:141] libmachine: (auto-911469) DBG | trying to create private KVM network mk-auto-911469 192.168.61.0/24...
	I0229 18:28:33.649469   42439 main.go:141] libmachine: (auto-911469) DBG | private KVM network mk-auto-911469 192.168.61.0/24 created
	I0229 18:28:33.649504   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:33.649436   42461 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:28:33.649522   42439 main.go:141] libmachine: (auto-911469) Setting up store path in /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469 ...
	I0229 18:28:33.649540   42439 main.go:141] libmachine: (auto-911469) Building disk image from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:28:33.649684   42439 main.go:141] libmachine: (auto-911469) Downloading /home/jenkins/minikube-integration/18259-6402/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:28:33.922876   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:33.922739   42461 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469/id_rsa...
	I0229 18:28:34.107171   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:34.106985   42461 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469/auto-911469.rawdisk...
	I0229 18:28:34.107217   42439 main.go:141] libmachine: (auto-911469) DBG | Writing magic tar header
	I0229 18:28:34.107231   42439 main.go:141] libmachine: (auto-911469) DBG | Writing SSH key tar header
	I0229 18:28:34.107243   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:34.107109   42461 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469 ...
	I0229 18:28:34.107256   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469 (perms=drwx------)
	I0229 18:28:34.107271   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:28:34.107280   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube (perms=drwxr-xr-x)
	I0229 18:28:34.107296   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402 (perms=drwxrwxr-x)
	I0229 18:28:34.107309   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:28:34.107320   42439 main.go:141] libmachine: (auto-911469) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:28:34.107349   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469
	I0229 18:28:34.107359   42439 main.go:141] libmachine: (auto-911469) Creating domain...
	I0229 18:28:34.107376   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines
	I0229 18:28:34.107388   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:28:34.107402   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402
	I0229 18:28:34.107410   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:28:34.107457   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:28:34.107494   42439 main.go:141] libmachine: (auto-911469) DBG | Checking permissions on dir: /home
	I0229 18:28:34.107507   42439 main.go:141] libmachine: (auto-911469) DBG | Skipping /home - not owner
	I0229 18:28:34.108720   42439 main.go:141] libmachine: (auto-911469) define libvirt domain using xml: 
	I0229 18:28:34.108771   42439 main.go:141] libmachine: (auto-911469) <domain type='kvm'>
	I0229 18:28:34.108798   42439 main.go:141] libmachine: (auto-911469)   <name>auto-911469</name>
	I0229 18:28:34.108810   42439 main.go:141] libmachine: (auto-911469)   <memory unit='MiB'>3072</memory>
	I0229 18:28:34.108819   42439 main.go:141] libmachine: (auto-911469)   <vcpu>2</vcpu>
	I0229 18:28:34.108826   42439 main.go:141] libmachine: (auto-911469)   <features>
	I0229 18:28:34.108833   42439 main.go:141] libmachine: (auto-911469)     <acpi/>
	I0229 18:28:34.108839   42439 main.go:141] libmachine: (auto-911469)     <apic/>
	I0229 18:28:34.108846   42439 main.go:141] libmachine: (auto-911469)     <pae/>
	I0229 18:28:34.108855   42439 main.go:141] libmachine: (auto-911469)     
	I0229 18:28:34.108884   42439 main.go:141] libmachine: (auto-911469)   </features>
	I0229 18:28:34.108905   42439 main.go:141] libmachine: (auto-911469)   <cpu mode='host-passthrough'>
	I0229 18:28:34.108913   42439 main.go:141] libmachine: (auto-911469)   
	I0229 18:28:34.108924   42439 main.go:141] libmachine: (auto-911469)   </cpu>
	I0229 18:28:34.108943   42439 main.go:141] libmachine: (auto-911469)   <os>
	I0229 18:28:34.108953   42439 main.go:141] libmachine: (auto-911469)     <type>hvm</type>
	I0229 18:28:34.108961   42439 main.go:141] libmachine: (auto-911469)     <boot dev='cdrom'/>
	I0229 18:28:34.108976   42439 main.go:141] libmachine: (auto-911469)     <boot dev='hd'/>
	I0229 18:28:34.108999   42439 main.go:141] libmachine: (auto-911469)     <bootmenu enable='no'/>
	I0229 18:28:34.109011   42439 main.go:141] libmachine: (auto-911469)   </os>
	I0229 18:28:34.109024   42439 main.go:141] libmachine: (auto-911469)   <devices>
	I0229 18:28:34.109038   42439 main.go:141] libmachine: (auto-911469)     <disk type='file' device='cdrom'>
	I0229 18:28:34.109051   42439 main.go:141] libmachine: (auto-911469)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469/boot2docker.iso'/>
	I0229 18:28:34.109059   42439 main.go:141] libmachine: (auto-911469)       <target dev='hdc' bus='scsi'/>
	I0229 18:28:34.109066   42439 main.go:141] libmachine: (auto-911469)       <readonly/>
	I0229 18:28:34.109078   42439 main.go:141] libmachine: (auto-911469)     </disk>
	I0229 18:28:34.109089   42439 main.go:141] libmachine: (auto-911469)     <disk type='file' device='disk'>
	I0229 18:28:34.109120   42439 main.go:141] libmachine: (auto-911469)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:28:34.109148   42439 main.go:141] libmachine: (auto-911469)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/auto-911469/auto-911469.rawdisk'/>
	I0229 18:28:34.109162   42439 main.go:141] libmachine: (auto-911469)       <target dev='hda' bus='virtio'/>
	I0229 18:28:34.109173   42439 main.go:141] libmachine: (auto-911469)     </disk>
	I0229 18:28:34.109184   42439 main.go:141] libmachine: (auto-911469)     <interface type='network'>
	I0229 18:28:34.109194   42439 main.go:141] libmachine: (auto-911469)       <source network='mk-auto-911469'/>
	I0229 18:28:34.109217   42439 main.go:141] libmachine: (auto-911469)       <model type='virtio'/>
	I0229 18:28:34.109236   42439 main.go:141] libmachine: (auto-911469)     </interface>
	I0229 18:28:34.109249   42439 main.go:141] libmachine: (auto-911469)     <interface type='network'>
	I0229 18:28:34.109260   42439 main.go:141] libmachine: (auto-911469)       <source network='default'/>
	I0229 18:28:34.109288   42439 main.go:141] libmachine: (auto-911469)       <model type='virtio'/>
	I0229 18:28:34.109299   42439 main.go:141] libmachine: (auto-911469)     </interface>
	I0229 18:28:34.109314   42439 main.go:141] libmachine: (auto-911469)     <serial type='pty'>
	I0229 18:28:34.109331   42439 main.go:141] libmachine: (auto-911469)       <target port='0'/>
	I0229 18:28:34.109341   42439 main.go:141] libmachine: (auto-911469)     </serial>
	I0229 18:28:34.109346   42439 main.go:141] libmachine: (auto-911469)     <console type='pty'>
	I0229 18:28:34.109351   42439 main.go:141] libmachine: (auto-911469)       <target type='serial' port='0'/>
	I0229 18:28:34.109358   42439 main.go:141] libmachine: (auto-911469)     </console>
	I0229 18:28:34.109381   42439 main.go:141] libmachine: (auto-911469)     <rng model='virtio'>
	I0229 18:28:34.109402   42439 main.go:141] libmachine: (auto-911469)       <backend model='random'>/dev/random</backend>
	I0229 18:28:34.109414   42439 main.go:141] libmachine: (auto-911469)     </rng>
	I0229 18:28:34.109423   42439 main.go:141] libmachine: (auto-911469)     
	I0229 18:28:34.109431   42439 main.go:141] libmachine: (auto-911469)     
	I0229 18:28:34.109440   42439 main.go:141] libmachine: (auto-911469)   </devices>
	I0229 18:28:34.109448   42439 main.go:141] libmachine: (auto-911469) </domain>
	I0229 18:28:34.109458   42439 main.go:141] libmachine: (auto-911469) 
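	The libvirt domain XML printed above is what the kvm2 driver submits to libvirt to create the auto-911469 VM. As a rough illustration only, the same XML saved to a file could be defined and started by hand with virsh; minikube itself talks to the libvirt API through the driver rather than shelling out.

// definedomain.go: sketch of defining and starting a libvirt domain from an XML file via virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart registers the domain described by xmlPath and boots it.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// "auto-911469.xml" is a hypothetical file holding the XML dumped in the log above.
	if err := defineAndStart("auto-911469.xml", "auto-911469"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}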
	I0229 18:28:34.113998   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:89:df:09 in network default
	I0229 18:28:34.114573   42439 main.go:141] libmachine: (auto-911469) Ensuring networks are active...
	I0229 18:28:34.114599   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:34.115362   42439 main.go:141] libmachine: (auto-911469) Ensuring network default is active
	I0229 18:28:34.115809   42439 main.go:141] libmachine: (auto-911469) Ensuring network mk-auto-911469 is active
	I0229 18:28:34.116516   42439 main.go:141] libmachine: (auto-911469) Getting domain xml...
	I0229 18:28:34.117313   42439 main.go:141] libmachine: (auto-911469) Creating domain...
	I0229 18:28:35.387828   42439 main.go:141] libmachine: (auto-911469) Waiting to get IP...
	I0229 18:28:35.388773   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:35.389257   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:35.389286   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:35.389242   42461 retry.go:31] will retry after 286.272194ms: waiting for machine to come up
	I0229 18:28:35.678018   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:35.679056   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:35.679103   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:35.678969   42461 retry.go:31] will retry after 367.356429ms: waiting for machine to come up
	I0229 18:28:36.047580   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:36.048146   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:36.048192   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:36.048103   42461 retry.go:31] will retry after 293.596206ms: waiting for machine to come up
	I0229 18:28:36.343821   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:36.344518   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:36.344558   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:36.344497   42461 retry.go:31] will retry after 436.001835ms: waiting for machine to come up
	I0229 18:28:36.781998   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:36.782593   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:36.782621   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:36.782539   42461 retry.go:31] will retry after 548.99601ms: waiting for machine to come up
	I0229 18:28:37.333497   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:37.334018   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:37.334045   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:37.333979   42461 retry.go:31] will retry after 816.242821ms: waiting for machine to come up
	I0229 18:28:38.151550   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:38.152166   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:38.152193   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:38.152118   42461 retry.go:31] will retry after 1.005274973s: waiting for machine to come up
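	While the new domain boots, the driver polls for a DHCP lease on the mk-auto-911469 network and retries with an increasing delay, which is what the retry.go lines above record. A simplified sketch of such a loop; lookupIP here is a stand-in for the driver's lease query, not a real minikube function.

// waitforip.go: sketch of polling with a growing delay until a VM reports an IP.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder; the real driver inspects the libvirt DHCP leases
// for the machine's MAC address on the mk-<profile> network.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // back off between polls, capped at a few seconds
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}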
	I0229 18:28:37.956935   42268 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.938884214s)
	I0229 18:28:37.956996   42268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:28:37.985153   42268 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0229 18:28:38.021100   42268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:28:38.041384   42268 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:28:38.196463   42268 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:28:38.357169   42268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:28:38.540985   42268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:28:38.562823   42268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:28:38.582018   42268 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:28:38.738943   42268 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:28:38.836964   42268 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:28:38.837034   42268 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
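	Restarting cri-docker leaves a short window before /var/run/cri-dockerd.sock exists again, so the run waits up to 60s for the socket path, as logged above. A minimal sketch of that wait:

// waitsocket.go: sketch of polling for a socket path until it exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}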
	I0229 18:28:38.846990   42268 start.go:543] Will wait 60s for crictl version
	I0229 18:28:38.847067   42268 ssh_runner.go:195] Run: which crictl
	I0229 18:28:38.851863   42268 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:28:38.907212   42268 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:28:38.907285   42268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:28:38.932760   42268 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:28:37.707854   41783 out.go:204]   - Booting up control plane ...
	I0229 18:28:37.708008   41783 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:28:37.708128   41783 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:28:37.708412   41783 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:28:37.732626   41783 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:28:37.736499   41783 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:28:37.736581   41783 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:28:37.931148   41783 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:28:38.964243   42268 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 18:28:38.964293   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetIP
	I0229 18:28:38.966821   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:38.967102   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:27:40 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:28:38.967142   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:38.967375   42268 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:28:38.972453   42268 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:28:38.972533   42268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:28:38.998087   42268 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:28:38.998122   42268 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:28:38.998184   42268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:28:39.020671   42268 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:28:39.020703   42268 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:28:39.020792   42268 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:28:39.080547   42268 cni.go:84] Creating CNI manager for ""
	I0229 18:28:39.080580   42268 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:28:39.080596   42268 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:28:39.080629   42268 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.169 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-235196 NodeName:kubernetes-upgrade-235196 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:28:39.080821   42268 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-235196"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:28:39.080924   42268 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-235196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:28:39.080994   42268 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:28:39.095520   42268 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:28:39.095606   42268 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:28:39.107562   42268 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (393 bytes)
	I0229 18:28:39.133790   42268 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:28:39.158255   42268 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2119 bytes)
	I0229 18:28:39.184192   42268 ssh_runner.go:195] Run: grep 192.168.72.169	control-plane.minikube.internal$ /etc/hosts
	I0229 18:28:39.190587   42268 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196 for IP: 192.168.72.169
	I0229 18:28:39.190624   42268 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:39.190768   42268 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:28:39.190814   42268 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:28:39.190874   42268 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key
	I0229 18:28:39.190920   42268 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key.8ded931d
	I0229 18:28:39.190954   42268 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key
	I0229 18:28:39.191052   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:28:39.191087   42268 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:28:39.191097   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:28:39.191138   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:28:39.191166   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:28:39.191234   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:28:39.191309   42268 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:28:39.192019   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:28:39.224610   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:28:39.256121   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:28:39.289007   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:28:39.322816   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:28:39.356135   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:28:39.388486   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:28:39.419230   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:28:39.448799   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:28:39.478879   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:28:39.513221   42268 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:28:39.551955   42268 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:28:39.577535   42268 ssh_runner.go:195] Run: openssl version
	I0229 18:28:39.584179   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:28:39.596923   42268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:28:39.602704   42268 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:28:39.602773   42268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:28:39.609858   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:28:39.621930   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:28:39.638945   42268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:39.644664   42268 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:39.644725   42268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:28:39.653703   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:28:39.668096   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:28:39.688120   42268 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:28:39.694874   42268 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:28:39.694992   42268 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:28:39.703742   42268 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
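The three blocks above (136052.pem, minikubeCA.pem, 13605.pem) all follow OpenSSL's subject-hash lookup convention: hash the certificate with openssl x509 -hash, then symlink the file into /etc/ssl/certs under <hash>.0 so TLS clients scanning that directory can find it. A minimal sketch of one such iteration, with an illustrative path:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"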
	I0229 18:28:39.718146   42268 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:28:39.724687   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:28:39.733225   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:28:39.743169   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:28:39.750366   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:28:39.759607   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:28:39.766162   42268 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
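Each -checkend 86400 run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; the command exits 0 if it will and non-zero if it will have expired by then, so a standalone check looks like:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"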
	I0229 18:28:39.773314   42268 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-235196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-235196 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.169 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:28:39.773561   42268 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:28:39.794696   42268 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:28:39.807767   42268 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:28:39.807839   42268 kubeadm.go:636] restartCluster start
	I0229 18:28:39.807905   42268 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:28:39.820313   42268 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:39.820989   42268 kubeconfig.go:92] found "kubernetes-upgrade-235196" server: "https://192.168.72.169:8443"
	I0229 18:28:39.822074   42268 kapi.go:59] client config for kubernetes-upgrade-235196: &rest.Config{Host:"https://192.168.72.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:39.822770   42268 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:28:39.837179   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:39.837321   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:39.855927   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:40.337295   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:40.337372   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:40.355871   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:40.837374   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:40.837472   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:40.855362   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:41.338077   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:41.338160   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:41.352682   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:41.837881   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:41.837985   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:41.856290   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:42.337597   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:42.337704   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:42.356691   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:39.158771   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:39.159367   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:39.159393   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:39.159292   42461 retry.go:31] will retry after 1.018990334s: waiting for machine to come up
	I0229 18:28:40.179576   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:40.180215   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:40.180238   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:40.180165   42461 retry.go:31] will retry after 1.191052715s: waiting for machine to come up
	I0229 18:28:41.372610   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:41.373161   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:41.373191   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:41.373110   42461 retry.go:31] will retry after 1.715838113s: waiting for machine to come up
	I0229 18:28:43.090766   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:43.091505   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:43.091555   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:43.091421   42461 retry.go:31] will retry after 1.789856666s: waiting for machine to come up
	I0229 18:28:45.441769   41783 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.509618 seconds
	I0229 18:28:45.441942   41783 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:28:45.468998   41783 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:28:46.015547   41783 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:28:46.015811   41783 kubeadm.go:322] [mark-control-plane] Marking the node force-systemd-env-887530 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:28:46.536478   41783 kubeadm.go:322] [bootstrap-token] Using token: krqulc.2sgkat05bauje15f
	I0229 18:28:46.538041   41783 out.go:204]   - Configuring RBAC rules ...
	I0229 18:28:46.538185   41783 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:28:46.553496   41783 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:28:46.568077   41783 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:28:46.573586   41783 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:28:46.582838   41783 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:28:46.592999   41783 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:28:46.642211   41783 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:28:47.061736   41783 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:28:47.140491   41783 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:28:47.148774   41783 kubeadm.go:322] 
	I0229 18:28:47.148875   41783 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:28:47.148885   41783 kubeadm.go:322] 
	I0229 18:28:47.148983   41783 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:28:47.148991   41783 kubeadm.go:322] 
	I0229 18:28:47.149047   41783 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:28:47.149401   41783 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:28:47.149474   41783 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:28:47.149481   41783 kubeadm.go:322] 
	I0229 18:28:47.149549   41783 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:28:47.149555   41783 kubeadm.go:322] 
	I0229 18:28:47.149625   41783 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:28:47.149632   41783 kubeadm.go:322] 
	I0229 18:28:47.149702   41783 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:28:47.149796   41783 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:28:47.149877   41783 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:28:47.149885   41783 kubeadm.go:322] 
	I0229 18:28:47.150015   41783 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:28:47.150112   41783 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:28:47.150119   41783 kubeadm.go:322] 
	I0229 18:28:47.150224   41783 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token krqulc.2sgkat05bauje15f \
	I0229 18:28:47.150354   41783 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:98ccf043fa74cca6041cd089e853b2c24813837c0f5e6536c32e208744fc3c70 \
	I0229 18:28:47.150394   41783 kubeadm.go:322] 	--control-plane 
	I0229 18:28:47.150401   41783 kubeadm.go:322] 
	I0229 18:28:47.150519   41783 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:28:47.150526   41783 kubeadm.go:322] 
	I0229 18:28:47.150626   41783 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token krqulc.2sgkat05bauje15f \
	I0229 18:28:47.150762   41783 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:98ccf043fa74cca6041cd089e853b2c24813837c0f5e6536c32e208744fc3c70 
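For reference, the sha256 discovery hash printed in the join commands above is derived from the cluster CA certificate; the standard recipe from the kubeadm documentation for recomputing it is shown below (the path is kubeadm's default location, which may differ from minikube's certificate layout):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'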
	I0229 18:28:47.155725   41783 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:28:47.155843   41783 cni.go:84] Creating CNI manager for ""
	I0229 18:28:47.155864   41783 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:28:47.157999   41783 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:28:42.837338   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:42.837470   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:42.852567   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:43.338249   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:43.338391   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:43.359019   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:43.837524   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:43.837634   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:43.854345   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:44.338038   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:44.338136   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:44.357473   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:44.838101   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:44.838204   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:44.856478   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:45.338074   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:45.338262   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:45.365616   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:45.837264   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:45.837359   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:45.885730   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:46.338271   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:46.338386   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:28:46.359298   42268 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:46.837856   42268 api_server.go:166] Checking apiserver status ...
	I0229 18:28:46.838026   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:28:46.863630   42268 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4165/cgroup
	W0229 18:28:46.882801   42268 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:28:46.882921   42268 ssh_runner.go:195] Run: ls
	I0229 18:28:46.892105   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:46.892965   42268 api_server.go:269] stopped: https://192.168.72.169:8443/healthz: Get "https://192.168.72.169:8443/healthz": dial tcp 192.168.72.169:8443: connect: connection refused
	I0229 18:28:46.893075   42268 retry.go:31] will retry after 245.814602ms: state is "Stopped"
	I0229 18:28:47.139481   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:47.159770   41783 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:28:47.198391   41783 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
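The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration announced at "Configuring bridge CNI (Container Networking Interface)" earlier in this log. A generic bridge conflist of that shape, with illustrative content rather than the exact bytes minikube generates, would be written like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipMasq": true, "hairpinMode": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF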
	I0229 18:28:47.240180   41783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:28:47.240354   41783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:28:47.240473   41783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=force-systemd-env-887530 minikube.k8s.io/updated_at=2024_02_29T18_28_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:28:47.819440   41783 ops.go:34] apiserver oom_adj: -16
	I0229 18:28:47.819483   41783 kubeadm.go:1088] duration metric: took 579.223507ms to wait for elevateKubeSystemPrivileges.
	I0229 18:28:47.819524   41783 kubeadm.go:406] StartCluster complete in 14.172490008s
	I0229 18:28:47.819560   41783 settings.go:142] acquiring lock: {Name:mk85324150508323d0a817853e472a1fdcadc314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:47.819652   41783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:28:47.821499   41783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:47.821775   41783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:28:47.821839   41783 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:28:47.821926   41783 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-env-887530"
	I0229 18:28:47.821944   41783 addons.go:234] Setting addon storage-provisioner=true in "force-systemd-env-887530"
	I0229 18:28:47.821997   41783 config.go:182] Loaded profile config "force-systemd-env-887530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:28:47.822007   41783 host.go:66] Checking if "force-systemd-env-887530" exists ...
	I0229 18:28:47.822063   41783 cache.go:107] acquiring lock: {Name:mk0db597c024ca72f3d806b204928d2d6d5c0ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:28:47.822140   41783 cache.go:115] /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 18:28:47.822153   41783 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 94.312µs
	I0229 18:28:47.822164   41783 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 18:28:47.822171   41783 cache.go:87] Successfully saved all images to host disk.
	I0229 18:28:47.822327   41783 config.go:182] Loaded profile config "force-systemd-env-887530": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:28:47.822379   41783 addons.go:69] Setting default-storageclass=true in profile "force-systemd-env-887530"
	I0229 18:28:47.822404   41783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-887530"
	I0229 18:28:47.822431   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.822467   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.822675   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.822704   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.822812   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.822852   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.822867   41783 kapi.go:59] client config for force-systemd-env-887530: &rest.Config{Host:"https://192.168.50.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:47.823409   41783 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 18:28:47.839717   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0229 18:28:47.840309   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.841128   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.841156   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.841350   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0229 18:28:47.841532   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.841833   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.841834   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetState
	I0229 18:28:47.842397   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.842415   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.842821   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.843025   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetState
	I0229 18:28:47.845499   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.845545   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.845494   41783 kapi.go:59] client config for force-systemd-env-887530: &rest.Config{Host:"https://192.168.50.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:47.845634   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0229 18:28:47.845829   41783 addons.go:234] Setting addon default-storageclass=true in "force-systemd-env-887530"
	I0229 18:28:47.845868   41783 host.go:66] Checking if "force-systemd-env-887530" exists ...
	I0229 18:28:47.845964   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.846299   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.846336   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.846393   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.846406   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.846888   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.847528   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.847565   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.865252   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37147
	I0229 18:28:47.865783   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.866305   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.866327   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.866743   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.866933   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .DriverName
	I0229 18:28:47.867220   41783 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:28:47.867257   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHHostname
	I0229 18:28:47.867321   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
	I0229 18:28:47.867807   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.868228   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.868244   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.868599   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.869176   41783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:47.869210   41783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:47.870383   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.870772   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:01:6c", ip: ""} in network mk-force-systemd-env-887530: {Iface:virbr2 ExpiryTime:2024-02-29 19:28:12 +0000 UTC Type:0 Mac:52:54:00:d5:01:6c Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:force-systemd-env-887530 Clientid:01:52:54:00:d5:01:6c}
	I0229 18:28:47.870810   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined IP address 192.168.50.4 and MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.870996   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHPort
	I0229 18:28:47.871163   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHKeyPath
	I0229 18:28:47.871420   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHUsername
	I0229 18:28:47.871581   41783 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/force-systemd-env-887530/id_rsa Username:docker}
	I0229 18:28:47.873740   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41381
	I0229 18:28:47.874348   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.874926   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.874944   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.875298   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.875486   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetState
	I0229 18:28:47.877185   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .DriverName
	I0229 18:28:47.879053   41783 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:28:47.880658   41783 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:28:47.880677   41783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:28:47.880694   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHHostname
	I0229 18:28:47.883920   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.884349   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:01:6c", ip: ""} in network mk-force-systemd-env-887530: {Iface:virbr2 ExpiryTime:2024-02-29 19:28:12 +0000 UTC Type:0 Mac:52:54:00:d5:01:6c Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:force-systemd-env-887530 Clientid:01:52:54:00:d5:01:6c}
	I0229 18:28:47.884369   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined IP address 192.168.50.4 and MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.884623   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHPort
	I0229 18:28:47.884800   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHKeyPath
	I0229 18:28:47.884925   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHUsername
	I0229 18:28:47.885024   41783 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/force-systemd-env-887530/id_rsa Username:docker}
	I0229 18:28:47.890759   41783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I0229 18:28:47.891234   41783 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:47.891810   41783 main.go:141] libmachine: Using API Version  1
	I0229 18:28:47.891827   41783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:47.892284   41783 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:47.892488   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetState
	I0229 18:28:47.894339   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .DriverName
	I0229 18:28:47.894585   41783 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:28:47.894600   41783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:28:47.894617   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHHostname
	I0229 18:28:47.897731   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.898214   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:01:6c", ip: ""} in network mk-force-systemd-env-887530: {Iface:virbr2 ExpiryTime:2024-02-29 19:28:12 +0000 UTC Type:0 Mac:52:54:00:d5:01:6c Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:force-systemd-env-887530 Clientid:01:52:54:00:d5:01:6c}
	I0229 18:28:47.898243   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | domain force-systemd-env-887530 has defined IP address 192.168.50.4 and MAC address 52:54:00:d5:01:6c in network mk-force-systemd-env-887530
	I0229 18:28:47.898389   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHPort
	I0229 18:28:47.898546   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHKeyPath
	I0229 18:28:47.898692   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .GetSSHUsername
	I0229 18:28:47.898810   41783 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/force-systemd-env-887530/id_rsa Username:docker}
	I0229 18:28:47.988474   41783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:28:48.074183   41783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:28:48.096464   41783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:28:48.365043   41783 kapi.go:248] "coredns" deployment in "kube-system" namespace and "force-systemd-env-887530" context rescaled to 1 replicas
	I0229 18:28:48.365103   41783 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:28:48.368229   41783 out.go:177] * Verifying Kubernetes components...
	I0229 18:28:44.883068   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:44.883630   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:44.883685   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:44.883540   42461 retry.go:31] will retry after 3.230181022s: waiting for machine to come up
	I0229 18:28:48.116982   42439 main.go:141] libmachine: (auto-911469) DBG | domain auto-911469 has defined MAC address 52:54:00:b9:36:a5 in network mk-auto-911469
	I0229 18:28:48.117566   42439 main.go:141] libmachine: (auto-911469) DBG | unable to find current IP address of domain auto-911469 in network mk-auto-911469
	I0229 18:28:48.117595   42439 main.go:141] libmachine: (auto-911469) DBG | I0229 18:28:48.117502   42461 retry.go:31] will retry after 4.156377155s: waiting for machine to come up
	I0229 18:28:48.369593   41783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:28:49.489403   41783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.50088571s)
	I0229 18:28:49.489489   41783 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.622246915s)
	I0229 18:28:49.489568   41783 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:28:49.489602   41783 docker.go:691] gcr.io/k8s-minikube/gvisor-addon:2 wasn't preloaded
	I0229 18:28:49.489616   41783 cache_images.go:88] LoadImages start: [gcr.io/k8s-minikube/gvisor-addon:2]
	I0229 18:28:49.489571   41783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.41534821s)
	I0229 18:28:49.489751   41783 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:49.489768   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .Close
	I0229 18:28:49.489493   41783 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0229 18:28:49.492126   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | Closing plugin on server side
	I0229 18:28:49.492137   41783 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:49.492153   41783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:49.492163   41783 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:49.492181   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .Close
	I0229 18:28:49.492547   41783 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:49.492560   41783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:49.493501   41783 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2
	I0229 18:28:49.500455   41783 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:49.500479   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .Close
	I0229 18:28:49.500828   41783 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:49.500869   41783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:49.500834   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | Closing plugin on server side
	I0229 18:28:49.749625   41783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.653120809s)
	I0229 18:28:49.749735   41783 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:49.749786   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .Close
	I0229 18:28:49.749956   41783 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.380335597s)
	I0229 18:28:49.750327   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | Closing plugin on server side
	I0229 18:28:49.750351   41783 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:49.750405   41783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:49.750425   41783 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:49.750442   41783 main.go:141] libmachine: (force-systemd-env-887530) Calling .Close
	I0229 18:28:49.750447   41783 cache_images.go:116] "gcr.io/k8s-minikube/gvisor-addon:2" needs transfer: "gcr.io/k8s-minikube/gvisor-addon:2" does not exist at hash "sha256:140546f59eee9ea4150fded3530ee088aa3086d6cf8ecc74dc790a6f13eb733a" in container runtime
	I0229 18:28:49.750704   41783 docker.go:337] Removing image: gcr.io/k8s-minikube/gvisor-addon:2
	I0229 18:28:49.750751   41783 main.go:141] libmachine: (force-systemd-env-887530) DBG | Closing plugin on server side
	I0229 18:28:49.750755   41783 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/gvisor-addon:2
	I0229 18:28:49.750842   41783 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:49.750878   41783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:49.752913   41783 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0229 18:28:50.087787   42268 api_server.go:279] https://192.168.72.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:28:50.087831   42268 retry.go:31] will retry after 238.678518ms: https://192.168.72.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:28:50.327033   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:50.334938   42268 api_server.go:279] https://192.168.72.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:28:50.334975   42268 retry.go:31] will retry after 487.558889ms: https://192.168.72.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:28:50.823358   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:50.829847   42268 api_server.go:279] https://192.168.72.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:28:50.829886   42268 retry.go:31] will retry after 518.125689ms: https://192.168.72.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:28:51.348613   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:51.353319   42268 api_server.go:279] https://192.168.72.169:8443/healthz returned 200:
	ok
	I0229 18:28:51.374920   42268 system_pods.go:86] 7 kube-system pods found
	I0229 18:28:51.375021   42268 system_pods.go:89] "coredns-76f75df574-sbvqq" [b53d268d-30e2-4ef4-b989-590ca143572e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:28:51.375039   42268 system_pods.go:89] "etcd-kubernetes-upgrade-235196" [b09bd6f6-d80d-45ea-b13b-87e3ddbdac93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:28:51.375052   42268 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-235196" [e636cefd-7a5f-48fa-8152-4490dcb48887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:28:51.375070   42268 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-235196" [cb95ec3c-6111-4c29-933a-1821ecdb0384] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:28:51.375082   42268 system_pods.go:89] "kube-proxy-tkwbc" [03f8e45c-3782-42f9-a3c3-0cc6bd7f7e3e] Running
	I0229 18:28:51.375094   42268 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-235196" [45b02b9a-cc89-4c2e-a9c6-992208380d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:28:51.375104   42268 system_pods.go:89] "storage-provisioner" [c35d2178-6a93-4965-9081-8bba8a012556] Running
	I0229 18:28:51.376478   42268 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:28:51.376500   42268 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.169
	I0229 18:28:51.376508   42268 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 18:28:51.376512   42268 kubeadm.go:640] restartCluster took 11.568657123s
	I0229 18:28:51.376517   42268 kubeadm.go:406] StartCluster complete in 11.60321318s
	I0229 18:28:51.376531   42268 settings.go:142] acquiring lock: {Name:mk85324150508323d0a817853e472a1fdcadc314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:51.376601   42268 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:28:51.377805   42268 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:28:51.378013   42268 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:28:51.378177   42268 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:28:51.378262   42268 config.go:182] Loaded profile config "kubernetes-upgrade-235196": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:28:51.378278   42268 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-235196"
	I0229 18:28:51.378297   42268 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-235196"
	I0229 18:28:51.378266   42268 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-235196"
	I0229 18:28:51.378331   42268 cache.go:107] acquiring lock: {Name:mk0db597c024ca72f3d806b204928d2d6d5c0ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:28:51.378384   42268 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-235196"
	W0229 18:28:51.378397   42268 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:28:51.378422   42268 cache.go:115] /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 18:28:51.378433   42268 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 106.315µs
	I0229 18:28:51.378442   42268 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 18:28:51.378450   42268 cache.go:87] Successfully saved all images to host disk.
	I0229 18:28:51.378497   42268 host.go:66] Checking if "kubernetes-upgrade-235196" exists ...
	I0229 18:28:51.378606   42268 config.go:182] Loaded profile config "kubernetes-upgrade-235196": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:28:51.378799   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.378845   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.378883   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.378914   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.378974   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.378999   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.379085   42268 kapi.go:59] client config for kubernetes-upgrade-235196: &rest.Config{Host:"https://192.168.72.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:51.399022   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0229 18:28:51.399253   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0229 18:28:51.399454   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.399614   42268 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-235196" context rescaled to 1 replicas
	I0229 18:28:51.399671   42268 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.169 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:28:51.401801   42268 out.go:177] * Verifying Kubernetes components...
	I0229 18:28:51.399936   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.400025   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.401752   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
	I0229 18:28:51.403222   42268 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:28:51.403225   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.403672   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.403690   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.403731   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.404120   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.404344   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.404366   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.404387   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetState
	I0229 18:28:51.404747   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.404929   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetState
	I0229 18:28:51.405441   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.406023   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.406072   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.407016   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.407052   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.408442   42268 kapi.go:59] client config for kubernetes-upgrade-235196: &rest.Config{Host:"https://192.168.72.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubernetes-upgrade-235196/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:51.408823   42268 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-235196"
	W0229 18:28:51.408841   42268 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:28:51.408867   42268 host.go:66] Checking if "kubernetes-upgrade-235196" exists ...
	I0229 18:28:51.409273   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.409314   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.427314   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
	I0229 18:28:51.427937   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.428459   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.428474   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.428858   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.429034   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:28:51.429200   42268 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:28:51.429217   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:28:51.429409   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0229 18:28:51.430354   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.430665   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39869
	I0229 18:28:51.431118   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.431813   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.431833   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.432207   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.432807   42268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:28:51.432845   42268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:28:51.433081   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.433116   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:27:40 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:28:51.433133   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.433265   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:28:51.433438   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:28:51.433564   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:28:51.433685   42268 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:28:51.440365   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.440391   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.440797   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.440983   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetState
	I0229 18:28:51.442719   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:28:51.444898   42268 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:28:49.751174   41783 kapi.go:59] client config for force-systemd-env-887530: &rest.Config{Host:"https://192.168.50.4:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.crt", KeyFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/force-systemd-env-887530/client.key", CAFile:"/home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5d0e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:28:49.754401   41783 addons.go:505] enable addons completed in 1.932567124s: enabled=[default-storageclass storage-provisioner]
	I0229 18:28:49.754709   41783 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:28:49.754771   41783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:28:49.794291   41783 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2
	I0229 18:28:49.794342   41783 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 -> /var/lib/minikube/images/gvisor-addon_2
	I0229 18:28:49.794423   41783 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2
	I0229 18:28:49.808001   41783 api_server.go:72] duration metric: took 1.442853651s to wait for apiserver process to appear ...
	I0229 18:28:49.808038   41783 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:28:49.808059   41783 api_server.go:253] Checking apiserver healthz at https://192.168.50.4:8443/healthz ...
	I0229 18:28:49.808067   41783 ssh_runner.go:352] existence check for /var/lib/minikube/images/gvisor-addon_2: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/gvisor-addon_2': No such file or directory
	I0229 18:28:49.808105   41783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 --> /var/lib/minikube/images/gvisor-addon_2 (89375232 bytes)
	I0229 18:28:49.821172   41783 api_server.go:279] https://192.168.50.4:8443/healthz returned 200:
	ok
	I0229 18:28:49.824146   41783 api_server.go:141] control plane version: v1.28.4
	I0229 18:28:49.824171   41783 api_server.go:131] duration metric: took 16.125232ms to wait for apiserver health ...
	I0229 18:28:49.824182   41783 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:28:49.850484   41783 system_pods.go:59] 5 kube-system pods found
	I0229 18:28:49.850547   41783 system_pods.go:61] "etcd-force-systemd-env-887530" [05580391-c08a-4a10-8652-582ed9e50d55] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:28:49.850559   41783 system_pods.go:61] "kube-apiserver-force-systemd-env-887530" [569ab59e-1c1f-424f-9d8b-0d33ecb4f499] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:28:49.850571   41783 system_pods.go:61] "kube-controller-manager-force-systemd-env-887530" [9e302450-b180-4b2e-bc99-92964dd0eb05] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:28:49.850577   41783 system_pods.go:61] "kube-scheduler-force-systemd-env-887530" [2bd73623-69e8-49a0-bdfa-a182a4afabf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:28:49.850590   41783 system_pods.go:61] "storage-provisioner" [67f020a4-550d-4228-92af-a56726bd85fc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0229 18:28:49.850597   41783 system_pods.go:74] duration metric: took 26.408722ms to wait for pod list to return data ...
	I0229 18:28:49.850617   41783 kubeadm.go:581] duration metric: took 1.485484562s to wait for : map[apiserver:true system_pods:true] ...
	I0229 18:28:49.850640   41783 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:28:49.860658   41783 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:28:49.860694   41783 node_conditions.go:123] node cpu capacity is 2
	I0229 18:28:49.860716   41783 node_conditions.go:105] duration metric: took 10.058239ms to run NodePressure ...
	I0229 18:28:49.860731   41783 start.go:228] waiting for startup goroutines ...
	I0229 18:28:50.583244   41783 docker.go:304] Loading image: /var/lib/minikube/images/gvisor-addon_2
	I0229 18:28:50.583284   41783 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load"
	I0229 18:28:51.446463   42268 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:28:51.446481   42268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:28:51.446502   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:28:51.449725   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.449968   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:27:40 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:28:51.450051   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.450399   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:28:51.450549   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:28:51.450683   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:28:51.450805   42268 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:28:51.452337   42268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0229 18:28:51.452947   42268 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:28:51.453490   42268 main.go:141] libmachine: Using API Version  1
	I0229 18:28:51.453506   42268 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:28:51.453984   42268 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:28:51.454138   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetState
	I0229 18:28:51.455567   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .DriverName
	I0229 18:28:51.455946   42268 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:28:51.455962   42268 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:28:51.455979   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHHostname
	I0229 18:28:51.458787   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.459257   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:57:33", ip: ""} in network mk-kubernetes-upgrade-235196: {Iface:virbr4 ExpiryTime:2024-02-29 19:27:40 +0000 UTC Type:0 Mac:52:54:00:85:57:33 Iaid: IPaddr:192.168.72.169 Prefix:24 Hostname:kubernetes-upgrade-235196 Clientid:01:52:54:00:85:57:33}
	I0229 18:28:51.459275   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | domain kubernetes-upgrade-235196 has defined IP address 192.168.72.169 and MAC address 52:54:00:85:57:33 in network mk-kubernetes-upgrade-235196
	I0229 18:28:51.459559   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHPort
	I0229 18:28:51.459809   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHKeyPath
	I0229 18:28:51.459967   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .GetSSHUsername
	I0229 18:28:51.460163   42268 sshutil.go:53] new ssh client: &{IP:192.168.72.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/kubernetes-upgrade-235196/id_rsa Username:docker}
	I0229 18:28:51.524485   42268 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:28:51.524572   42268 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:28:51.524845   42268 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:28:51.549473   42268 api_server.go:72] duration metric: took 149.758854ms to wait for apiserver process to appear ...
	I0229 18:28:51.549521   42268 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:28:51.549543   42268 api_server.go:253] Checking apiserver healthz at https://192.168.72.169:8443/healthz ...
	I0229 18:28:51.554324   42268 api_server.go:279] https://192.168.72.169:8443/healthz returned 200:
	ok
	I0229 18:28:51.555507   42268 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:28:51.555534   42268 api_server.go:131] duration metric: took 6.004819ms to wait for apiserver health ...
	I0229 18:28:51.555545   42268 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:28:51.563101   42268 system_pods.go:59] 7 kube-system pods found
	I0229 18:28:51.563142   42268 system_pods.go:61] "coredns-76f75df574-sbvqq" [b53d268d-30e2-4ef4-b989-590ca143572e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:28:51.563152   42268 system_pods.go:61] "etcd-kubernetes-upgrade-235196" [b09bd6f6-d80d-45ea-b13b-87e3ddbdac93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:28:51.563166   42268 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-235196" [e636cefd-7a5f-48fa-8152-4490dcb48887] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:28:51.563178   42268 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-235196" [cb95ec3c-6111-4c29-933a-1821ecdb0384] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:28:51.563189   42268 system_pods.go:61] "kube-proxy-tkwbc" [03f8e45c-3782-42f9-a3c3-0cc6bd7f7e3e] Running
	I0229 18:28:51.563200   42268 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-235196" [45b02b9a-cc89-4c2e-a9c6-992208380d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:28:51.563208   42268 system_pods.go:61] "storage-provisioner" [c35d2178-6a93-4965-9081-8bba8a012556] Running
	I0229 18:28:51.563217   42268 system_pods.go:74] duration metric: took 7.665131ms to wait for pod list to return data ...
	I0229 18:28:51.563233   42268 kubeadm.go:581] duration metric: took 163.523432ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 18:28:51.563251   42268 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:28:51.569241   42268 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:28:51.569268   42268 node_conditions.go:123] node cpu capacity is 2
	I0229 18:28:51.569282   42268 node_conditions.go:105] duration metric: took 6.021244ms to run NodePressure ...
	I0229 18:28:51.569297   42268 start.go:228] waiting for startup goroutines ...
	I0229 18:28:51.575703   42268 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:28:51.575721   42268 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:28:51.575733   42268 cache_images.go:262] succeeded pushing to: kubernetes-upgrade-235196
	I0229 18:28:51.575738   42268 cache_images.go:263] failed pushing to: 
	I0229 18:28:51.575762   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:51.575776   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:51.577736   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Closing plugin on server side
	I0229 18:28:51.577762   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:51.577781   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:51.577797   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:51.577806   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:51.578115   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:51.578130   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:51.600947   42268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:28:51.608152   42268 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:28:52.606130   42268 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.005142142s)
	I0229 18:28:52.606187   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:52.606198   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:52.606209   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:52.606228   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:52.606467   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Closing plugin on server side
	I0229 18:28:52.606501   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Closing plugin on server side
	I0229 18:28:52.606535   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:52.606542   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:52.606557   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:52.606571   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:52.606588   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:52.606597   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:52.606608   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:52.606616   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:52.606828   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:52.606841   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:52.607060   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) DBG | Closing plugin on server side
	I0229 18:28:52.607145   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:52.607203   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:52.614806   42268 main.go:141] libmachine: Making call to close driver server
	I0229 18:28:52.614828   42268 main.go:141] libmachine: (kubernetes-upgrade-235196) Calling .Close
	I0229 18:28:52.615105   42268 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:28:52.615117   42268 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:28:52.617135   42268 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:28:52.618576   42268 addons.go:505] enable addons completed in 1.240420435s: enabled=[storage-provisioner default-storageclass]
	I0229 18:28:52.618618   42268 start.go:233] waiting for cluster config update ...
	I0229 18:28:52.618632   42268 start.go:242] writing updated cluster config ...
	I0229 18:28:52.618881   42268 ssh_runner.go:195] Run: rm -f paused
	I0229 18:28:52.670590   42268 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:28:52.672593   42268 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-235196" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.478346613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.485962310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.486621366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.488657751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.526308735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.526630284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.526724033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.527109002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.594904023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.595105690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.595213636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.595966498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.720572388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.721004977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.721107784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:46 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:46.721450789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:50 kubernetes-upgrade-235196 cri-dockerd[3346]: time="2024-02-29T18:28:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Feb 29 18:28:50 kubernetes-upgrade-235196 dockerd[3136]: time="2024-02-29T18:28:50.803729926Z" level=info msg="ignoring event" container=95eee0dd4ff7c4ea080adc7f82081941edcefc2fa1a5160252b2c3341ebed795 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 18:28:50 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:50.804815461Z" level=info msg="shim disconnected" id=95eee0dd4ff7c4ea080adc7f82081941edcefc2fa1a5160252b2c3341ebed795 namespace=moby
	Feb 29 18:28:50 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:50.805462187Z" level=warning msg="cleaning up after shim disconnected" id=95eee0dd4ff7c4ea080adc7f82081941edcefc2fa1a5160252b2c3341ebed795 namespace=moby
	Feb 29 18:28:50 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:50.805540258Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 18:28:51 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:51.959313080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:28:51 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:51.959589438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:28:51 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:51.959617994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:28:51 kubernetes-upgrade-235196 dockerd[3143]: time="2024-02-29T18:28:51.959799664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a98a3b7a7053f       6e38f40d628db       2 seconds ago       Running             storage-provisioner       1                   442d9f324fcb5       storage-provisioner
	1fb5975eb2534       cbb01a7bd410d       7 seconds ago       Running             coredns                   1                   52a22f95481c1       coredns-76f75df574-sbvqq
	8b4d21ae0be8a       a0eed15eed449       7 seconds ago       Running             etcd                      1                   fe201dcbb02d4       etcd-kubernetes-upgrade-235196
	4206f2c44e654       4270645ed6b7a       7 seconds ago       Running             kube-scheduler            1                   4acaefcd4d7de       kube-scheduler-kubernetes-upgrade-235196
	a2b222745e82b       bbb47a0f83324       7 seconds ago       Running             kube-apiserver            1                   ef7dc4002c4dd       kube-apiserver-kubernetes-upgrade-235196
	c5eed16239785       d4e01cdf63970       7 seconds ago       Running             kube-controller-manager   1                   fe2c5920285e1       kube-controller-manager-kubernetes-upgrade-235196
	95eee0dd4ff7c       6e38f40d628db       8 seconds ago       Exited              storage-provisioner       0                   442d9f324fcb5       storage-provisioner
	7430062a31a95       cc0a4f00aad7b       8 seconds ago       Running             kube-proxy                1                   fa1c6c3f0f534       kube-proxy-tkwbc
	4669a83929d49       cbb01a7bd410d       28 seconds ago      Created             coredns                   0                   e85d7dd23e2da       coredns-76f75df574-sbvqq
	16d03d545c8ff       cc0a4f00aad7b       28 seconds ago      Created             kube-proxy                0                   1bf62f27c9ca3       kube-proxy-tkwbc
	163c82d862d87       a0eed15eed449       49 seconds ago      Exited              etcd                      0                   1f38e52c11b30       etcd-kubernetes-upgrade-235196
	1ab7c2972e5b4       4270645ed6b7a       49 seconds ago      Exited              kube-scheduler            0                   ef33d31f25622       kube-scheduler-kubernetes-upgrade-235196
	4681bff7f4dd9       bbb47a0f83324       49 seconds ago      Exited              kube-apiserver            0                   172e395676760       kube-apiserver-kubernetes-upgrade-235196
	ae06daac8df0c       d4e01cdf63970       49 seconds ago      Exited              kube-controller-manager   0                   ee8ea78ac1917       kube-controller-manager-kubernetes-upgrade-235196
	
	
	==> coredns [1fb5975eb253] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41586 - 47877 "HINFO IN 8643252654831801614.2872232549111579847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023995694s
	
	
	==> coredns [4669a83929d4] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-235196
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-235196
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:28:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-235196
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:28:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:28:50 +0000   Thu, 29 Feb 2024 18:28:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:28:50 +0000   Thu, 29 Feb 2024 18:28:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:28:50 +0000   Thu, 29 Feb 2024 18:28:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:28:50 +0000   Thu, 29 Feb 2024 18:28:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.169
	  Hostname:    kubernetes-upgrade-235196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 206effd6f30e43e186d57bf204feed23
	  System UUID:                206effd6-f30e-43e1-86d5-7bf204feed23
	  Boot ID:                    f28fc585-02ee-419f-85e6-97ff8fb0722c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-sbvqq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29s
	  kube-system                 etcd-kubernetes-upgrade-235196                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         37s
	  kube-system                 kube-apiserver-kubernetes-upgrade-235196             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-235196    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-tkwbc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-kubernetes-upgrade-235196             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node kubernetes-upgrade-235196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node kubernetes-upgrade-235196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node kubernetes-upgrade-235196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-235196 event: Registered Node kubernetes-upgrade-235196 in Controller
	
	
	==> dmesg <==
	[  +0.067323] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +1.286114] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.452813] systemd-fstab-generator[823]: Ignoring "noauto" option for root device
	[  +0.171375] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.221679] systemd-fstab-generator[849]: Ignoring "noauto" option for root device
	[  +1.705380] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +0.178452] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[  +0.181829] systemd-fstab-generator[1034]: Ignoring "noauto" option for root device
	[  +0.207428] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +5.039291] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
	[  +0.072610] kauditd_printk_skb: 348 callbacks suppressed
	[Feb29 18:28] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +1.147336] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.506722] kauditd_printk_skb: 25 callbacks suppressed
	[ +13.910250] systemd-fstab-generator[2530]: Ignoring "noauto" option for root device
	[  +0.435683] systemd-fstab-generator[2563]: Ignoring "noauto" option for root device
	[  +0.250404] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.405829] systemd-fstab-generator[2675]: Ignoring "noauto" option for root device
	[ +10.529888] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.769048] systemd-fstab-generator[3299]: Ignoring "noauto" option for root device
	[  +0.168950] systemd-fstab-generator[3311]: Ignoring "noauto" option for root device
	[  +0.160185] systemd-fstab-generator[3323]: Ignoring "noauto" option for root device
	[  +0.226862] systemd-fstab-generator[3338]: Ignoring "noauto" option for root device
	[  +7.002946] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.005143] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [163c82d862d8] <==
	{"level":"info","ts":"2024-02-29T18:28:06.257344Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"79cd052ad92d4108","local-member-id":"b0efa2e16a3d8d48","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:28:06.279759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:28:06.279836Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:28:15.238813Z","caller":"traceutil/trace.go:171","msg":"trace[388726490] transaction","detail":"{read_only:false; response_revision:235; number_of_response:1; }","duration":"147.728941ms","start":"2024-02-29T18:28:15.091059Z","end":"2024-02-29T18:28:15.238788Z","steps":["trace[388726490] 'process raft request'  (duration: 147.612169ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T18:28:16.73149Z","caller":"traceutil/trace.go:171","msg":"trace[826189041] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"146.346238ms","start":"2024-02-29T18:28:16.585071Z","end":"2024-02-29T18:28:16.731417Z","steps":["trace[826189041] 'process raft request'  (duration: 146.227614ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T18:28:16.735773Z","caller":"traceutil/trace.go:171","msg":"trace[381846260] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"143.17635ms","start":"2024-02-29T18:28:16.592585Z","end":"2024-02-29T18:28:16.735761Z","steps":["trace[381846260] 'process raft request'  (duration: 143.083387ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T18:28:20.874819Z","caller":"traceutil/trace.go:171","msg":"trace[859507824] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"186.821584ms","start":"2024-02-29T18:28:20.687969Z","end":"2024-02-29T18:28:20.87479Z","steps":["trace[859507824] 'process raft request'  (duration: 97.638874ms)","trace[859507824] 'compare'  (duration: 89.049446ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T18:28:21.458519Z","caller":"traceutil/trace.go:171","msg":"trace[1192219619] transaction","detail":"{read_only:false; response_revision:294; number_of_response:1; }","duration":"163.983617ms","start":"2024-02-29T18:28:21.294514Z","end":"2024-02-29T18:28:21.458498Z","steps":["trace[1192219619] 'process raft request'  (duration: 163.687652ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T18:28:21.848405Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.780259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-02-29T18:28:21.848579Z","caller":"traceutil/trace.go:171","msg":"trace[882471415] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:295; }","duration":"114.047348ms","start":"2024-02-29T18:28:21.734513Z","end":"2024-02-29T18:28:21.84856Z","steps":["trace[882471415] 'range keys from in-memory index tree'  (duration: 113.598894ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T18:28:23.857059Z","caller":"traceutil/trace.go:171","msg":"trace[1578074123] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"136.730134ms","start":"2024-02-29T18:28:23.720308Z","end":"2024-02-29T18:28:23.857038Z","steps":["trace[1578074123] 'process raft request'  (duration: 108.455676ms)","trace[1578074123] 'compare'  (duration: 28.177936ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T18:28:23.860989Z","caller":"traceutil/trace.go:171","msg":"trace[823371996] linearizableReadLoop","detail":"{readStateIndex:318; appliedIndex:316; }","duration":"121.961846ms","start":"2024-02-29T18:28:23.73901Z","end":"2024-02-29T18:28:23.860972Z","steps":["trace[823371996] 'read index received'  (duration: 89.762306ms)","trace[823371996] 'applied index is now lower than readState.Index'  (duration: 32.198669ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T18:28:23.861193Z","caller":"traceutil/trace.go:171","msg":"trace[1476963304] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"133.563537ms","start":"2024-02-29T18:28:23.727619Z","end":"2024-02-29T18:28:23.861183Z","steps":["trace[1476963304] 'process raft request'  (duration: 133.218392ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T18:28:23.86158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.580129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2024-02-29T18:28:23.861654Z","caller":"traceutil/trace.go:171","msg":"trace[1262706895] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:311; }","duration":"122.657869ms","start":"2024-02-29T18:28:23.738978Z","end":"2024-02-29T18:28:23.861636Z","steps":["trace[1262706895] 'agreement among raft nodes before linearized reading'  (duration: 122.548285ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T18:28:26.272871Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T18:28:26.272968Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-235196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.169:2380"],"advertise-client-urls":["https://192.168.72.169:2379"]}
	{"level":"warn","ts":"2024-02-29T18:28:26.273056Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:28:26.273319Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:28:26.366703Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.169:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:28:26.3668Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.169:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T18:28:26.366873Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b0efa2e16a3d8d48","current-leader-member-id":"b0efa2e16a3d8d48"}
	{"level":"info","ts":"2024-02-29T18:28:26.371048Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.169:2380"}
	{"level":"info","ts":"2024-02-29T18:28:26.371624Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.169:2380"}
	{"level":"info","ts":"2024-02-29T18:28:26.371647Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-235196","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.169:2380"],"advertise-client-urls":["https://192.168.72.169:2379"]}
	
	
	==> etcd [8b4d21ae0be8] <==
	{"level":"info","ts":"2024-02-29T18:28:47.399053Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:28:47.39907Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:28:47.398989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 switched to configuration voters=(12749588159142923592)"}
	{"level":"info","ts":"2024-02-29T18:28:47.414522Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"79cd052ad92d4108","local-member-id":"b0efa2e16a3d8d48","added-peer-id":"b0efa2e16a3d8d48","added-peer-peer-urls":["https://192.168.72.169:2380"]}
	{"level":"info","ts":"2024-02-29T18:28:47.417442Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"79cd052ad92d4108","local-member-id":"b0efa2e16a3d8d48","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:28:47.417554Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:28:47.456728Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:28:47.457402Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.169:2380"}
	{"level":"info","ts":"2024-02-29T18:28:47.460424Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.169:2380"}
	{"level":"info","ts":"2024-02-29T18:28:47.461379Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b0efa2e16a3d8d48","initial-advertise-peer-urls":["https://192.168.72.169:2380"],"listen-peer-urls":["https://192.168.72.169:2380"],"advertise-client-urls":["https://192.168.72.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:28:47.461708Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:28:48.504513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:28:48.50466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:28:48.504708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 received MsgPreVoteResp from b0efa2e16a3d8d48 at term 2"}
	{"level":"info","ts":"2024-02-29T18:28:48.504735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:28:48.504842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 received MsgVoteResp from b0efa2e16a3d8d48 at term 3"}
	{"level":"info","ts":"2024-02-29T18:28:48.505009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0efa2e16a3d8d48 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:28:48.505039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0efa2e16a3d8d48 elected leader b0efa2e16a3d8d48 at term 3"}
	{"level":"info","ts":"2024-02-29T18:28:48.506735Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b0efa2e16a3d8d48","local-member-attributes":"{Name:kubernetes-upgrade-235196 ClientURLs:[https://192.168.72.169:2379]}","request-path":"/0/members/b0efa2e16a3d8d48/attributes","cluster-id":"79cd052ad92d4108","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:28:48.506783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:28:48.507322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:28:48.509324Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:28:48.50937Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:28:48.511641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.169:2379"}
	{"level":"info","ts":"2024-02-29T18:28:48.513233Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:28:54 up 1 min,  0 users,  load average: 1.75, 0.44, 0.15
	Linux kubernetes-upgrade-235196 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4681bff7f4dd] <==
	W0229 18:28:35.213828       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.254788       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.271094       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.310587       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.348554       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.357852       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.432424       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.464893       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.527736       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.527967       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.697378       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.701046       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.804760       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.904646       1 logging.go:59] [core] [Channel #14 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.923214       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.968169       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:35.995399       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.014233       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.046626       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.123954       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.128892       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.218981       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.238163       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.268834       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:28:36.287900       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a2b222745e82] <==
	I0229 18:28:50.029608       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 18:28:50.035124       1 aggregator.go:163] waiting for initial CRD sync...
	I0229 18:28:50.035148       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0229 18:28:50.035155       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0229 18:28:50.036906       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:28:50.036998       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:28:50.068183       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 18:28:50.068443       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 18:28:50.129131       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 18:28:50.129175       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 18:28:50.144596       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:28:50.168606       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:28:50.169305       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:28:50.170103       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:28:50.171616       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:28:50.172164       1 aggregator.go:165] initial CRD sync complete...
	I0229 18:28:50.172335       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:28:50.172506       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:28:50.172675       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:28:50.180517       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:28:50.181225       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:28:51.030778       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 18:28:51.354435       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.169]
	I0229 18:28:51.356802       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:28:51.363742       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ae06daac8df0] <==
	I0229 18:28:23.713063       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 18:28:23.713225       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0229 18:28:23.713494       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="kubernetes-upgrade-235196"
	I0229 18:28:23.713555       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 18:28:23.713816       1 event.go:376] "Event occurred" object="kubernetes-upgrade-235196" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubernetes-upgrade-235196 event: Registered Node kubernetes-upgrade-235196 in Controller"
	I0229 18:28:23.713909       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 18:28:23.717617       1 shared_informer.go:318] Caches are synced for ephemeral
	I0229 18:28:23.695090       1 shared_informer.go:318] Caches are synced for PV protection
	I0229 18:28:23.695103       1 shared_informer.go:318] Caches are synced for job
	I0229 18:28:23.724614       1 shared_informer.go:318] Caches are synced for service account
	I0229 18:28:23.739649       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0229 18:28:23.762042       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:28:23.778826       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:28:23.790542       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 18:28:23.843077       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 18:28:24.161896       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:28:24.169195       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:28:24.169283       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 18:28:24.366461       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 1"
	I0229 18:28:24.454697       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tkwbc"
	I0229 18:28:24.555771       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-sbvqq"
	I0229 18:28:24.589560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="225.565793ms"
	I0229 18:28:24.633721       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="44.086155ms"
	I0229 18:28:24.684869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="51.096047ms"
	I0229 18:28:24.684990       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.726µs"
	
	
	==> kube-controller-manager [c5eed1623978] <==
	I0229 18:28:52.238776       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0229 18:28:52.238784       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0229 18:28:52.238791       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0229 18:28:52.245751       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0229 18:28:52.245820       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0229 18:28:52.246013       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0229 18:28:52.246042       1 taint_eviction.go:291] "Sending events to api server"
	I0229 18:28:52.246067       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0229 18:28:52.251792       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0229 18:28:52.251987       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0229 18:28:52.252018       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0229 18:28:52.279891       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0229 18:28:52.280065       1 horizontal.go:200] "Starting HPA controller"
	I0229 18:28:52.280107       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0229 18:28:52.306762       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0229 18:28:52.307132       1 disruption.go:433] "Sending events to api server."
	I0229 18:28:52.307992       1 disruption.go:444] "Starting disruption controller"
	I0229 18:28:52.308176       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0229 18:28:52.316606       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0229 18:28:52.316656       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0229 18:28:52.316991       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0229 18:28:52.317023       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0229 18:28:52.322207       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0229 18:28:52.322833       1 job_controller.go:224] "Starting job controller"
	I0229 18:28:52.323220       1 shared_informer.go:311] Waiting for caches to sync for job
	
	
	==> kube-proxy [16d03d545c8f] <==
	
	
	==> kube-proxy [7430062a31a9] <==
	I0229 18:28:48.133798       1 server_others.go:72] "Using iptables proxy"
	I0229 18:28:50.193216       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.72.169"]
	I0229 18:28:50.401801       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 18:28:50.401985       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:28:50.402153       1 server_others.go:168] "Using iptables Proxier"
	I0229 18:28:50.416577       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:28:50.417635       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 18:28:50.417907       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:28:50.432823       1 config.go:188] "Starting service config controller"
	I0229 18:28:50.433468       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:28:50.433809       1 config.go:315] "Starting node config controller"
	I0229 18:28:50.433928       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:28:50.439337       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:28:50.439501       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:28:50.533739       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:28:50.534923       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:28:50.540512       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1ab7c2972e5b] <==
	W0229 18:28:09.135366       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 18:28:09.135480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 18:28:09.153996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 18:28:09.154354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 18:28:09.176605       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 18:28:09.176945       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:28:09.209532       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 18:28:09.209633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 18:28:09.234509       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:28:09.234575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 18:28:09.320827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 18:28:09.320854       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 18:28:09.345132       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 18:28:09.345184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 18:28:09.385146       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 18:28:09.385212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 18:28:09.390073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:28:09.390150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:28:09.532941       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 18:28:09.533010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0229 18:28:11.528001       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:28:26.307964       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 18:28:26.309635       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 18:28:26.309815       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 18:28:26.310125       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4206f2c44e65] <==
	I0229 18:28:48.225415       1 serving.go:380] Generated self-signed cert in-memory
	W0229 18:28:50.099680       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 18:28:50.109549       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:28:50.109992       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 18:28:50.110222       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 18:28:50.197942       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 18:28:50.198776       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:28:50.202435       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 18:28:50.202496       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:28:50.204750       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 18:28:50.205765       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 18:28:50.303940       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:28:43 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:43.478964    1530 status_manager.go:853] "Failed to get status for pod" podUID="175e45786eccc36ac66c0276ab2c4696" pod="kube-system/kube-apiserver-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.224724    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee8ea78ac191788f737ab5b5eb2e8e7d6f8e9ab8acd1332361ce709f65ad9451"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.224895    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="172e395676760e7fd4f953a4524b1682d4e5ace878ea53c95f26fdeb171856f2"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.224983    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a4932b81e1aa16aa74e034077bf767f63d62dcb9d17091ff21d7246d18ee0f"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.224995    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf62f27c9ca36b4db04f06d06526a639669a6d4b76db5192e150c2df005cee8"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.225006    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85d7dd23e2daae09266e41a758e47bea6b1730d22b0e3fa8606156f51d5874c"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.225082    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef33d31f2562289113739c18f75e7390132bacd67b843208cd0140cd11cc111b"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.225100    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f38e52c11b300265309547a9561b49b8ef4d1df0c5411d127091925e10a5659"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.228021    1530 status_manager.go:853] "Failed to get status for pod" podUID="b53d268d-30e2-4ef4-b989-590ca143572e" pod="kube-system/coredns-76f75df574-sbvqq" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sbvqq\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.228811    1530 status_manager.go:853] "Failed to get status for pod" podUID="d92f46b49b7a2f6e6fd32a37908f5c62" pod="kube-system/etcd-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.229478    1530 status_manager.go:853] "Failed to get status for pod" podUID="175e45786eccc36ac66c0276ab2c4696" pod="kube-system/kube-apiserver-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.230040    1530 status_manager.go:853] "Failed to get status for pod" podUID="308d61bab76db7c6b96285655c3c70cc" pod="kube-system/kube-controller-manager-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.230703    1530 status_manager.go:853] "Failed to get status for pod" podUID="03f8e45c-3782-42f9-a3c3-0cc6bd7f7e3e" pod="kube-system/kube-proxy-tkwbc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkwbc\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.231347    1530 status_manager.go:853] "Failed to get status for pod" podUID="175e45786eccc36ac66c0276ab2c4696" pod="kube-system/kube-apiserver-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.232058    1530 status_manager.go:853] "Failed to get status for pod" podUID="308d61bab76db7c6b96285655c3c70cc" pod="kube-system/kube-controller-manager-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.232694    1530 status_manager.go:853] "Failed to get status for pod" podUID="f4b46ed0c4aaa45edfd2bad1ad8ffc12" pod="kube-system/kube-scheduler-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.233723    1530 status_manager.go:853] "Failed to get status for pod" podUID="03f8e45c-3782-42f9-a3c3-0cc6bd7f7e3e" pod="kube-system/kube-proxy-tkwbc" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tkwbc\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.235164    1530 status_manager.go:853] "Failed to get status for pod" podUID="b53d268d-30e2-4ef4-b989-590ca143572e" pod="kube-system/coredns-76f75df574-sbvqq" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sbvqq\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:45 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:45.235976    1530 status_manager.go:853] "Failed to get status for pod" podUID="d92f46b49b7a2f6e6fd32a37908f5c62" pod="kube-system/etcd-kubernetes-upgrade-235196" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-235196\": dial tcp 192.168.72.169:8443: connect: connection refused"
	Feb 29 18:28:46 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:46.169408    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4acaefcd4d7debad824dc4c8e2a3bb7c211d25bbddfb207599f132d46f033b42"
	Feb 29 18:28:47 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:47.010964    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe201dcbb02d4a4aa91172134aba1994c3d7fea684d9be530e5fd9387c9cc846"
	Feb 29 18:28:47 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:47.619042    1530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a22f95481c1a5fd1ce6c50401f761cae42f92ad0b504879a1e116133baeb38"
	Feb 29 18:28:50 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:50.073909    1530 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 29 18:28:50 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:50.074797    1530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 29 18:28:51 kubernetes-upgrade-235196 kubelet[1530]: I0229 18:28:51.820021    1530 scope.go:117] "RemoveContainer" containerID="95eee0dd4ff7c4ea080adc7f82081941edcefc2fa1a5160252b2c3341ebed795"
	
	
	==> storage-provisioner [95eee0dd4ff7] <==
	I0229 18:28:47.689515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0229 18:28:50.769870       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a98a3b7a7053] <==
	I0229 18:28:52.111442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 18:28:52.221095       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 18:28:52.221478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 18:28:52.277882       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 18:28:52.278613       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2eda2164-0dbe-42f1-b1a8-5ce308c58217", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-235196_133b829e-6d1b-400f-a9c6-a8a4a84acf5f became leader
	I0229 18:28:52.285646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-235196_133b829e-6d1b-400f-a9c6-a8a4a84acf5f!
	I0229 18:28:52.386557       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-235196_133b829e-6d1b-400f-a9c6-a8a4a84acf5f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-235196 -n kubernetes-upgrade-235196
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-235196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-235196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-235196
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-235196: (1.16437304s)
--- FAIL: TestKubernetesUpgrade (416.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (283.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: exit status 109 (4m43.126626637s)

                                                
                                                
-- stdout --
	* [old-k8s-version-467811] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node old-k8s-version-467811 in cluster old-k8s-version-467811
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:33:53.392459   53990 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:33:53.392615   53990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:33:53.392625   53990 out.go:304] Setting ErrFile to fd 2...
	I0229 18:33:53.392632   53990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:33:53.392822   53990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:33:53.393412   53990 out.go:298] Setting JSON to false
	I0229 18:33:53.394488   53990 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4584,"bootTime":1709227050,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:33:53.394549   53990 start.go:139] virtualization: kvm guest
	I0229 18:33:53.396762   53990 out.go:177] * [old-k8s-version-467811] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:33:53.398700   53990 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:33:53.398708   53990 notify.go:220] Checking for updates...
	I0229 18:33:53.400208   53990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:33:53.401939   53990 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:33:53.403486   53990 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:33:53.404980   53990 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:33:53.406435   53990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:33:53.408312   53990 config.go:182] Loaded profile config "bridge-911469": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:33:53.408424   53990 config.go:182] Loaded profile config "flannel-911469": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:33:53.408524   53990 config.go:182] Loaded profile config "kubenet-911469": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:33:53.408637   53990 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:33:53.450973   53990 out.go:177] * Using the kvm2 driver based on user configuration
	I0229 18:33:53.452370   53990 start.go:299] selected driver: kvm2
	I0229 18:33:53.452389   53990 start.go:903] validating driver "kvm2" against <nil>
	I0229 18:33:53.452403   53990 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:33:53.453254   53990 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:33:53.453343   53990 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:33:53.470937   53990 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:33:53.470988   53990 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:33:53.471251   53990 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:33:53.471331   53990 cni.go:84] Creating CNI manager for ""
	I0229 18:33:53.471350   53990 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:33:53.471357   53990 start_flags.go:323] config:
	{Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:33:53.471530   53990 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:33:53.473561   53990 out.go:177] * Starting control plane node old-k8s-version-467811 in cluster old-k8s-version-467811
	I0229 18:33:53.474870   53990 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:33:53.474901   53990 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 18:33:53.474911   53990 cache.go:56] Caching tarball of preloaded images
	I0229 18:33:53.474978   53990 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:33:53.474988   53990 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 18:33:53.475065   53990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json ...
	I0229 18:33:53.475090   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json: {Name:mk283066d85f79a359f644dcae05400ad0df7ede Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
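The profile dumped in the config block above is persisted as the config.json written on the preceding lines. A minimal sketch of how such a saved profile could be inspected, assuming only a small subset of its fields (the struct below is not minikube's real ClusterConfig type, and the path is simply the one logged for this run):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Assumed subset of the saved profile config; unknown fields are ignored by encoding/json.
type profileConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}
}

func main() {
	// Path as logged for this run; adjust for another environment.
	path := "/home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s k8s=%s mem=%dMB cpus=%d\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.Memory, cfg.CPUs)
}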
	I0229 18:33:53.475212   53990 start.go:365] acquiring machines lock for old-k8s-version-467811: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:33:59.961190   53990 start.go:369] acquired machines lock for "old-k8s-version-467811" in 6.485955014s
	I0229 18:33:59.961253   53990 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:33:59.961368   53990 start.go:125] createHost starting for "" (driver="kvm2")
	I0229 18:33:59.963518   53990 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:33:59.963826   53990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:33:59.963879   53990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:33:59.981880   53990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0229 18:33:59.982281   53990 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:33:59.982759   53990 main.go:141] libmachine: Using API Version  1
	I0229 18:33:59.982782   53990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:33:59.983230   53990 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:33:59.983413   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:33:59.983575   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:33:59.983745   53990 start.go:159] libmachine.API.Create for "old-k8s-version-467811" (driver="kvm2")
	I0229 18:33:59.983785   53990 client.go:168] LocalClient.Create starting
	I0229 18:33:59.983813   53990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem
	I0229 18:33:59.983845   53990 main.go:141] libmachine: Decoding PEM data...
	I0229 18:33:59.983862   53990 main.go:141] libmachine: Parsing certificate...
	I0229 18:33:59.983906   53990 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem
	I0229 18:33:59.983920   53990 main.go:141] libmachine: Decoding PEM data...
	I0229 18:33:59.983928   53990 main.go:141] libmachine: Parsing certificate...
	I0229 18:33:59.983940   53990 main.go:141] libmachine: Running pre-create checks...
	I0229 18:33:59.983950   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .PreCreateCheck
	I0229 18:33:59.984322   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetConfigRaw
	I0229 18:33:59.984729   53990 main.go:141] libmachine: Creating machine...
	I0229 18:33:59.984744   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .Create
	I0229 18:33:59.984866   53990 main.go:141] libmachine: (old-k8s-version-467811) Creating KVM machine...
	I0229 18:33:59.986026   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found existing default KVM network
	I0229 18:33:59.987715   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:33:59.987400   54417 network.go:212] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a0:2e:d7} reservation:<nil>}
	I0229 18:33:59.988617   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:33:59.988400   54417 network.go:212] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a6:dc:b2} reservation:<nil>}
	I0229 18:33:59.989497   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:33:59.989375   54417 network.go:212] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:46:27:a0} reservation:<nil>}
	I0229 18:33:59.990578   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:33:59.990499   54417 network.go:207] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002b36c0}
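The four DBG lines above show the driver probing candidate 192.168.x.0/24 subnets and taking the first one with no existing libvirt interface. A toy sketch of that idea, with the taken subnets hard-coded from this run rather than discovered from the host:

package main

import "fmt"

func main() {
	// Subnets already claimed by other libvirt networks in this run (hard-coded for illustration).
	taken := map[string]bool{
		"192.168.39.0/24": true,
		"192.168.50.0/24": true,
		"192.168.61.0/24": true,
	}
	// Probe candidate private /24 subnets in order until a free one is found.
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		break
	}
}

Real code would enumerate the host's interfaces and existing libvirt networks instead of using a hard-coded set.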
	I0229 18:33:59.997685   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | trying to create private KVM network mk-old-k8s-version-467811 192.168.72.0/24...
	I0229 18:34:00.074012   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting up store path in /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811 ...
	I0229 18:34:00.074058   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | private KVM network mk-old-k8s-version-467811 192.168.72.0/24 created
	I0229 18:34:00.074076   53990 main.go:141] libmachine: (old-k8s-version-467811) Building disk image from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 18:34:00.074096   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:00.073960   54417 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:34:00.074147   53990 main.go:141] libmachine: (old-k8s-version-467811) Downloading /home/jenkins/minikube-integration/18259-6402/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:34:00.326466   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:00.326348   54417 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa...
	I0229 18:34:00.468197   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:00.468069   54417 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/old-k8s-version-467811.rawdisk...
	I0229 18:34:00.468230   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Writing magic tar header
	I0229 18:34:00.468249   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Writing SSH key tar header
	I0229 18:34:00.468262   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:00.468232   54417 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811 ...
	I0229 18:34:00.468478   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811 (perms=drwx------)
	I0229 18:34:00.468509   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines (perms=drwxr-xr-x)
	I0229 18:34:00.468523   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811
	I0229 18:34:00.468539   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines
	I0229 18:34:00.468551   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:34:00.468564   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402
	I0229 18:34:00.468573   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0229 18:34:00.468584   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home/jenkins
	I0229 18:34:00.468593   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Checking permissions on dir: /home
	I0229 18:34:00.468609   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Skipping /home - not owner
	I0229 18:34:00.468631   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube (perms=drwxr-xr-x)
	I0229 18:34:00.468643   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402 (perms=drwxrwxr-x)
	I0229 18:34:00.468658   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0229 18:34:00.468667   53990 main.go:141] libmachine: (old-k8s-version-467811) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0229 18:34:00.468678   53990 main.go:141] libmachine: (old-k8s-version-467811) Creating domain...
	I0229 18:34:00.470704   53990 main.go:141] libmachine: (old-k8s-version-467811) define libvirt domain using xml: 
	I0229 18:34:00.470721   53990 main.go:141] libmachine: (old-k8s-version-467811) <domain type='kvm'>
	I0229 18:34:00.470732   53990 main.go:141] libmachine: (old-k8s-version-467811)   <name>old-k8s-version-467811</name>
	I0229 18:34:00.470740   53990 main.go:141] libmachine: (old-k8s-version-467811)   <memory unit='MiB'>2200</memory>
	I0229 18:34:00.470748   53990 main.go:141] libmachine: (old-k8s-version-467811)   <vcpu>2</vcpu>
	I0229 18:34:00.470755   53990 main.go:141] libmachine: (old-k8s-version-467811)   <features>
	I0229 18:34:00.470763   53990 main.go:141] libmachine: (old-k8s-version-467811)     <acpi/>
	I0229 18:34:00.470771   53990 main.go:141] libmachine: (old-k8s-version-467811)     <apic/>
	I0229 18:34:00.470779   53990 main.go:141] libmachine: (old-k8s-version-467811)     <pae/>
	I0229 18:34:00.470786   53990 main.go:141] libmachine: (old-k8s-version-467811)     
	I0229 18:34:00.470794   53990 main.go:141] libmachine: (old-k8s-version-467811)   </features>
	I0229 18:34:00.470801   53990 main.go:141] libmachine: (old-k8s-version-467811)   <cpu mode='host-passthrough'>
	I0229 18:34:00.470808   53990 main.go:141] libmachine: (old-k8s-version-467811)   
	I0229 18:34:00.470825   53990 main.go:141] libmachine: (old-k8s-version-467811)   </cpu>
	I0229 18:34:00.470833   53990 main.go:141] libmachine: (old-k8s-version-467811)   <os>
	I0229 18:34:00.470843   53990 main.go:141] libmachine: (old-k8s-version-467811)     <type>hvm</type>
	I0229 18:34:00.470851   53990 main.go:141] libmachine: (old-k8s-version-467811)     <boot dev='cdrom'/>
	I0229 18:34:00.470863   53990 main.go:141] libmachine: (old-k8s-version-467811)     <boot dev='hd'/>
	I0229 18:34:00.470872   53990 main.go:141] libmachine: (old-k8s-version-467811)     <bootmenu enable='no'/>
	I0229 18:34:00.470886   53990 main.go:141] libmachine: (old-k8s-version-467811)   </os>
	I0229 18:34:00.470894   53990 main.go:141] libmachine: (old-k8s-version-467811)   <devices>
	I0229 18:34:00.470902   53990 main.go:141] libmachine: (old-k8s-version-467811)     <disk type='file' device='cdrom'>
	I0229 18:34:00.470916   53990 main.go:141] libmachine: (old-k8s-version-467811)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/boot2docker.iso'/>
	I0229 18:34:00.470924   53990 main.go:141] libmachine: (old-k8s-version-467811)       <target dev='hdc' bus='scsi'/>
	I0229 18:34:00.470955   53990 main.go:141] libmachine: (old-k8s-version-467811)       <readonly/>
	I0229 18:34:00.470977   53990 main.go:141] libmachine: (old-k8s-version-467811)     </disk>
	I0229 18:34:00.470988   53990 main.go:141] libmachine: (old-k8s-version-467811)     <disk type='file' device='disk'>
	I0229 18:34:00.471002   53990 main.go:141] libmachine: (old-k8s-version-467811)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0229 18:34:00.471018   53990 main.go:141] libmachine: (old-k8s-version-467811)       <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/old-k8s-version-467811.rawdisk'/>
	I0229 18:34:00.471027   53990 main.go:141] libmachine: (old-k8s-version-467811)       <target dev='hda' bus='virtio'/>
	I0229 18:34:00.471036   53990 main.go:141] libmachine: (old-k8s-version-467811)     </disk>
	I0229 18:34:00.471057   53990 main.go:141] libmachine: (old-k8s-version-467811)     <interface type='network'>
	I0229 18:34:00.471068   53990 main.go:141] libmachine: (old-k8s-version-467811)       <source network='mk-old-k8s-version-467811'/>
	I0229 18:34:00.471080   53990 main.go:141] libmachine: (old-k8s-version-467811)       <model type='virtio'/>
	I0229 18:34:00.471089   53990 main.go:141] libmachine: (old-k8s-version-467811)     </interface>
	I0229 18:34:00.471097   53990 main.go:141] libmachine: (old-k8s-version-467811)     <interface type='network'>
	I0229 18:34:00.471107   53990 main.go:141] libmachine: (old-k8s-version-467811)       <source network='default'/>
	I0229 18:34:00.471115   53990 main.go:141] libmachine: (old-k8s-version-467811)       <model type='virtio'/>
	I0229 18:34:00.471125   53990 main.go:141] libmachine: (old-k8s-version-467811)     </interface>
	I0229 18:34:00.471132   53990 main.go:141] libmachine: (old-k8s-version-467811)     <serial type='pty'>
	I0229 18:34:00.471140   53990 main.go:141] libmachine: (old-k8s-version-467811)       <target port='0'/>
	I0229 18:34:00.471155   53990 main.go:141] libmachine: (old-k8s-version-467811)     </serial>
	I0229 18:34:00.471163   53990 main.go:141] libmachine: (old-k8s-version-467811)     <console type='pty'>
	I0229 18:34:00.471172   53990 main.go:141] libmachine: (old-k8s-version-467811)       <target type='serial' port='0'/>
	I0229 18:34:00.471179   53990 main.go:141] libmachine: (old-k8s-version-467811)     </console>
	I0229 18:34:00.471187   53990 main.go:141] libmachine: (old-k8s-version-467811)     <rng model='virtio'>
	I0229 18:34:00.471196   53990 main.go:141] libmachine: (old-k8s-version-467811)       <backend model='random'>/dev/random</backend>
	I0229 18:34:00.471204   53990 main.go:141] libmachine: (old-k8s-version-467811)     </rng>
	I0229 18:34:00.471210   53990 main.go:141] libmachine: (old-k8s-version-467811)     
	I0229 18:34:00.471217   53990 main.go:141] libmachine: (old-k8s-version-467811)     
	I0229 18:34:00.471224   53990 main.go:141] libmachine: (old-k8s-version-467811)   </devices>
	I0229 18:34:00.471233   53990 main.go:141] libmachine: (old-k8s-version-467811) </domain>
	I0229 18:34:00.471239   53990 main.go:141] libmachine: (old-k8s-version-467811) 
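The block above is the libvirt domain XML the driver defines for this VM. Below is a trimmed-down sketch of how such a definition might be templated in Go; the parameter struct and the shortened XML are assumptions for illustration, and the rendered result would then be handed to virsh define or the libvirt API:

package main

import (
	"os"
	"text/template"
)

// Assumed parameters for the trimmed-down domain definition below.
type domainParams struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISO      string
	Disk     string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
	p := domainParams{
		Name:     "old-k8s-version-467811",
		MemoryMB: 2200,
		VCPUs:    2,
		ISO:      "/path/to/boot2docker.iso",
		Disk:     "/path/to/old-k8s-version-467811.rawdisk",
		Network:  "mk-old-k8s-version-467811",
	}
	// Render the XML to stdout; defining the domain is a separate libvirt call.
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}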
	I0229 18:34:00.475831   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:69:4b:63 in network default
	I0229 18:34:00.476561   53990 main.go:141] libmachine: (old-k8s-version-467811) Ensuring networks are active...
	I0229 18:34:00.476584   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:00.477266   53990 main.go:141] libmachine: (old-k8s-version-467811) Ensuring network default is active
	I0229 18:34:00.477735   53990 main.go:141] libmachine: (old-k8s-version-467811) Ensuring network mk-old-k8s-version-467811 is active
	I0229 18:34:00.478350   53990 main.go:141] libmachine: (old-k8s-version-467811) Getting domain xml...
	I0229 18:34:00.479071   53990 main.go:141] libmachine: (old-k8s-version-467811) Creating domain...
	I0229 18:34:02.027767   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:02.027799   53990 main.go:141] libmachine: (old-k8s-version-467811) Waiting to get IP...
	I0229 18:34:02.027811   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:02.027827   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:02.026985   54417 retry.go:31] will retry after 233.177492ms: waiting for machine to come up
	I0229 18:34:02.261574   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:02.262204   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:02.262234   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:02.262161   54417 retry.go:31] will retry after 323.693072ms: waiting for machine to come up
	I0229 18:34:02.587782   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:02.588479   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:02.588504   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:02.588394   54417 retry.go:31] will retry after 296.759826ms: waiting for machine to come up
	I0229 18:34:02.886883   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:02.887476   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:02.887494   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:02.887427   54417 retry.go:31] will retry after 473.231873ms: waiting for machine to come up
	I0229 18:34:03.361950   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:03.362514   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:03.362534   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:03.362433   54417 retry.go:31] will retry after 570.868856ms: waiting for machine to come up
	I0229 18:34:03.935216   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:03.936161   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:03.936188   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:03.936085   54417 retry.go:31] will retry after 609.938031ms: waiting for machine to come up
	I0229 18:34:04.547288   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:04.547866   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:04.547896   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:04.547813   54417 retry.go:31] will retry after 802.481749ms: waiting for machine to come up
	I0229 18:34:05.351433   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:05.352034   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:05.352056   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:05.351954   54417 retry.go:31] will retry after 1.445057355s: waiting for machine to come up
	I0229 18:34:06.798437   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:06.799001   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:06.799026   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:06.798949   54417 retry.go:31] will retry after 1.310293753s: waiting for machine to come up
	I0229 18:34:08.111551   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:08.112150   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:08.112183   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:08.112112   54417 retry.go:31] will retry after 1.71169369s: waiting for machine to come up
	I0229 18:34:10.081748   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:10.082353   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:10.082377   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:10.082304   54417 retry.go:31] will retry after 1.872855982s: waiting for machine to come up
	I0229 18:34:11.957321   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:11.957885   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:11.957917   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:11.957832   54417 retry.go:31] will retry after 2.988421456s: waiting for machine to come up
	I0229 18:34:14.948772   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:14.949277   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:14.949304   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:14.949242   54417 retry.go:31] will retry after 3.511636137s: waiting for machine to come up
	I0229 18:34:18.462278   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:18.462871   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:34:18.462900   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:34:18.462824   54417 retry.go:31] will retry after 4.999228419s: waiting for machine to come up
	I0229 18:34:23.465618   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:23.466261   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has current primary IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:23.466293   53990 main.go:141] libmachine: (old-k8s-version-467811) Found IP for machine: 192.168.72.30
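The retry lines above wait for the domain's DHCP lease to appear, sleeping a little longer (with jitter) after each failed lookup. A minimal sketch of that pattern; lookupIP here is a stand-in that succeeds after a few attempts, whereas the real driver queries libvirt for the lease:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying libvirt for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.30", nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait a little each time and add jitter, roughly like the retry.go delays above.
		wait := time.Duration(200+attempt*150)*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	fmt.Println("timed out waiting for machine IP")
}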
	I0229 18:34:23.466307   53990 main.go:141] libmachine: (old-k8s-version-467811) Reserving static IP address...
	I0229 18:34:23.466659   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-467811", mac: "52:54:00:44:95:0e", ip: "192.168.72.30"} in network mk-old-k8s-version-467811
	I0229 18:34:23.547827   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Getting to WaitForSSH function...
	I0229 18:34:23.547855   53990 main.go:141] libmachine: (old-k8s-version-467811) Reserved static IP address: 192.168.72.30
	I0229 18:34:23.547869   53990 main.go:141] libmachine: (old-k8s-version-467811) Waiting for SSH to be available...
	I0229 18:34:23.550867   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:23.551335   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811
	I0229 18:34:23.551363   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find defined IP address of network mk-old-k8s-version-467811 interface with MAC address 52:54:00:44:95:0e
	I0229 18:34:23.551526   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH client type: external
	I0229 18:34:23.551571   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa (-rw-------)
	I0229 18:34:23.551609   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:34:23.551694   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | About to run SSH command:
	I0229 18:34:23.551716   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | exit 0
	I0229 18:34:23.555395   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | SSH cmd err, output: exit status 255: 
	I0229 18:34:23.555418   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0229 18:34:23.555429   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | command : exit 0
	I0229 18:34:23.555442   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | err     : exit status 255
	I0229 18:34:23.555457   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | output  : 
	I0229 18:34:26.556794   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Getting to WaitForSSH function...
	I0229 18:34:26.559465   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.559886   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:26.559918   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.560059   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH client type: external
	I0229 18:34:26.560090   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa (-rw-------)
	I0229 18:34:26.560117   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:34:26.560129   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | About to run SSH command:
	I0229 18:34:26.560145   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | exit 0
	I0229 18:34:26.683859   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | SSH cmd err, output: <nil>: 
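WaitForSSH above shells out to the system ssh client with host-key checking disabled and keeps running exit 0 until it succeeds (the first attempt failed with status 255 because sshd was not up yet). A sketch of that probe; the options mirror the ones in the log and the key path is this run's generated key:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns true once "exit 0" can be run over ssh as the docker user.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	// Any non-zero exit (for example status 255 while sshd is still starting) means "not ready yet".
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ok := sshReady("192.168.72.30", "/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa")
	fmt.Println("ssh ready:", ok)
}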
	I0229 18:34:26.684103   53990 main.go:141] libmachine: (old-k8s-version-467811) KVM machine creation complete!
	I0229 18:34:26.684434   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetConfigRaw
	I0229 18:34:26.684965   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:26.685184   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:26.685332   53990 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0229 18:34:26.685349   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetState
	I0229 18:34:26.686560   53990 main.go:141] libmachine: Detecting operating system of created instance...
	I0229 18:34:26.686574   53990 main.go:141] libmachine: Waiting for SSH to be available...
	I0229 18:34:26.686580   53990 main.go:141] libmachine: Getting to WaitForSSH function...
	I0229 18:34:26.686586   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:26.689147   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.689550   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:26.689574   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.689755   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:26.689960   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.690140   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.690298   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:26.690442   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:26.690682   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:26.690697   53990 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0229 18:34:26.795716   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:34:26.795744   53990 main.go:141] libmachine: Detecting the provisioner...
	I0229 18:34:26.795755   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:26.798785   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.799155   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:26.799189   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.799347   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:26.799585   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.799806   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.799993   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:26.800198   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:26.800374   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:26.800386   53990 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0229 18:34:26.908919   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0229 18:34:26.909017   53990 main.go:141] libmachine: found compatible host: buildroot
	I0229 18:34:26.909030   53990 main.go:141] libmachine: Provisioning with buildroot...
	I0229 18:34:26.909038   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:34:26.909301   53990 buildroot.go:166] provisioning hostname "old-k8s-version-467811"
	I0229 18:34:26.909332   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:34:26.909506   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:26.912193   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.912622   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:26.912655   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:26.912736   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:26.912934   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.913116   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:26.913277   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:26.913439   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:26.913621   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:26.913635   53990 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467811 && echo "old-k8s-version-467811" | sudo tee /etc/hostname
	I0229 18:34:27.034868   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467811
	
	I0229 18:34:27.034900   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.037607   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.037944   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.037971   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.038195   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:27.038397   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.038580   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.038758   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:27.038946   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:27.039152   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:27.039179   53990 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467811/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:34:27.154069   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:34:27.154103   53990 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:34:27.154124   53990 buildroot.go:174] setting up certificates
	I0229 18:34:27.154136   53990 provision.go:83] configureAuth start
	I0229 18:34:27.154149   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:34:27.154487   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:34:27.157195   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.157565   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.157607   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.157782   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.159967   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.160339   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.160369   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.160498   53990 provision.go:138] copyHostCerts
	I0229 18:34:27.160553   53990 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:34:27.160568   53990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:34:27.160621   53990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:34:27.160695   53990 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:34:27.160702   53990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:34:27.160722   53990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:34:27.160767   53990 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:34:27.160774   53990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:34:27.160791   53990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:34:27.160851   53990 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467811 san=[192.168.72.30 192.168.72.30 localhost 127.0.0.1 minikube old-k8s-version-467811]
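configureAuth above generates a server certificate whose subject alternative names cover the VM IP, loopback, and the machine's host names, signed by the minikube CA listed in the auth options. The sketch below creates a certificate with the same SANs but self-signs it for brevity, so it illustrates the SAN handling rather than the actual CA-signing flow:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; the run above signs the server cert with the minikube CA instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-467811"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		// SANs as logged: the VM IP, loopback, and the host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.72.30"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-467811"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}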
	I0229 18:34:27.392728   53990 provision.go:172] copyRemoteCerts
	I0229 18:34:27.392781   53990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:34:27.392811   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.395688   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.396063   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.396093   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.396234   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:27.396417   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.396584   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:27.396718   53990 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:34:27.482614   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:34:27.508778   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:34:27.535415   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:34:27.561421   53990 provision.go:86] duration metric: configureAuth took 407.271231ms
	I0229 18:34:27.561451   53990 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:34:27.561618   53990 config.go:182] Loaded profile config "old-k8s-version-467811": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:34:27.561640   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:27.561921   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.564690   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.565105   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.565135   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.565354   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:27.565550   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.565729   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.565908   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:27.566075   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:27.566268   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:27.566285   53990 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:34:27.677428   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:34:27.677460   53990 buildroot.go:70] root file system type: tmpfs
	I0229 18:34:27.677596   53990 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:34:27.677623   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.680394   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.680771   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.680797   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.681025   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:27.681238   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.681410   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.681543   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:27.681703   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:27.681868   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:27.681929   53990 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:34:27.802621   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:34:27.802660   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:27.805623   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.806088   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:27.806115   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:27.806320   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:27.806524   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.806684   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:27.806851   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:27.807030   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:27.807186   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:27.807203   53990 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:34:28.725011   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
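
The "diff: can't stat" message is expected on a fresh VM: the rendered unit is written to docker.service.new and only swapped into place (followed by daemon-reload, enable and restart) when it differs from, or is missing at, /lib/systemd/system/docker.service. A minimal Go sketch of that compare-and-swap pattern, with placeholder paths and service name rather than minikube's actual provisioner code:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // installUnit writes unit to path+".new", swaps it in only when the content
    // differs from (or is missing at) path, then reloads systemd and restarts
    // the service. Paths and service name are illustrative only.
    func installUnit(path, service string, unit []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, unit) {
            return nil // unchanged: leave the running service alone
        }
        if err := os.WriteFile(path+".new", unit, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=example\n")
        if err := installUnit("/tmp/example.service", "example", unit); err != nil {
            log.Fatal(err)
        }
    }
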
	
	I0229 18:34:28.725054   53990 main.go:141] libmachine: Checking connection to Docker...
	I0229 18:34:28.725068   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetURL
	I0229 18:34:28.726445   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using libvirt version 6000000
	I0229 18:34:28.729267   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.729713   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.729742   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.729921   53990 main.go:141] libmachine: Docker is up and running!
	I0229 18:34:28.729937   53990 main.go:141] libmachine: Reticulating splines...
	I0229 18:34:28.729945   53990 client.go:171] LocalClient.Create took 28.746149655s
	I0229 18:34:28.729970   53990 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-467811" took 28.746227721s
	I0229 18:34:28.729982   53990 start.go:300] post-start starting for "old-k8s-version-467811" (driver="kvm2")
	I0229 18:34:28.729997   53990 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:34:28.730021   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:28.730285   53990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:34:28.730339   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:28.732971   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.733294   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.733328   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.733488   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:28.733651   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:28.733834   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:28.733983   53990 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:34:28.822829   53990 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:34:28.827135   53990 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:34:28.827158   53990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:34:28.827223   53990 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:34:28.827314   53990 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:34:28.827427   53990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:34:28.837626   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:34:28.863118   53990 start.go:303] post-start completed in 133.119742ms
	I0229 18:34:28.863175   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetConfigRaw
	I0229 18:34:28.863772   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:34:28.866291   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.866700   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.866740   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.866947   53990 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json ...
	I0229 18:34:28.867132   53990 start.go:128] duration metric: createHost completed in 28.905752377s
	I0229 18:34:28.867171   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:28.869437   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.869775   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.869807   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.869958   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:28.870166   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:28.870340   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:28.870507   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:28.870666   53990 main.go:141] libmachine: Using SSH client type: native
	I0229 18:34:28.870886   53990 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:34:28.870901   53990 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:34:28.981844   53990 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709231668.969463493
	
	I0229 18:34:28.981872   53990 fix.go:206] guest clock: 1709231668.969463493
	I0229 18:34:28.981882   53990 fix.go:219] Guest: 2024-02-29 18:34:28.969463493 +0000 UTC Remote: 2024-02-29 18:34:28.867144245 +0000 UTC m=+35.528531857 (delta=102.319248ms)
	I0229 18:34:28.981921   53990 fix.go:190] guest clock delta is within tolerance: 102.319248ms
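
The clock check above runs date +%s.%N on the guest and compares it with the host wall clock; provisioning continues because the ~102ms delta is inside the tolerance. A small sketch of the same comparison, assuming the date output format shown above and a hypothetical 2s tolerance:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // withinTolerance parses the "seconds.nanoseconds" value printed by
    // `date +%s.%N` on the guest and reports whether it is within tol of the
    // local (host) clock. The 2s tolerance used in main is an assumption.
    func withinTolerance(guest string, tol time.Duration) (time.Duration, bool, error) {
        sec, err := strconv.ParseFloat(guest, 64)
        if err != nil {
            return 0, false, err
        }
        guestTime := time.Unix(0, int64(sec*float64(time.Second)))
        delta := time.Since(guestTime)
        return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }

    func main() {
        delta, ok, err := withinTolerance("1709231668.969463493", 2*time.Second)
        fmt.Printf("delta=%v within=%v err=%v\n", delta, ok, err)
    }
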
	I0229 18:34:28.981934   53990 start.go:83] releasing machines lock for "old-k8s-version-467811", held for 29.02071355s
	I0229 18:34:28.981963   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:28.982260   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:34:28.985654   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.986022   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.986044   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.986239   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:28.986805   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:28.986985   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:34:28.987080   53990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:34:28.987134   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:28.987391   53990 ssh_runner.go:195] Run: cat /version.json
	I0229 18:34:28.987418   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:34:28.990270   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.991522   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:28.991529   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.991562   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.991586   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.991839   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:28.991934   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:28.991955   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:28.992059   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:28.992171   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:34:28.992243   53990 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:34:28.992498   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:34:28.992660   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:34:28.992820   53990 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:34:29.077406   53990 ssh_runner.go:195] Run: systemctl --version
	I0229 18:34:29.107166   53990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:34:29.114413   53990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:34:29.114495   53990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:34:29.128723   53990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:34:29.155012   53990 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:34:29.155050   53990 start.go:475] detecting cgroup driver to use...
	I0229 18:34:29.155194   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:34:29.189681   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:34:29.208262   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:34:29.227551   53990 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:34:29.227748   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:34:29.239936   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:34:29.252386   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:34:29.270039   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:34:29.285216   53990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:34:29.299942   53990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:34:29.311956   53990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:34:29.322487   53990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:34:29.333087   53990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:34:29.481542   53990 ssh_runner.go:195] Run: sudo systemctl restart containerd
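
The sed commands above point crictl at containerd, force the cgroupfs driver (SystemdCgroup = false), switch runc to the v2 shim and set the shared CNI conf dir before restarting containerd. A rough Go equivalent of just the SystemdCgroup edit, mirroring the sed expression (config path is an assumption):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    // setCgroupfs flips every `SystemdCgroup = ...` line in a containerd
    // config.toml to false, preserving indentation, much like the sed call
    // in the log above.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
            log.Fatal(err)
        }
    }
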
	I0229 18:34:29.509290   53990 start.go:475] detecting cgroup driver to use...
	I0229 18:34:29.509395   53990 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:34:29.528220   53990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:34:29.551338   53990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:34:29.572812   53990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:34:29.589572   53990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:34:29.604441   53990 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:34:29.633296   53990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:34:29.650860   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:34:29.674636   53990 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:34:29.679665   53990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:34:29.691038   53990 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:34:29.711155   53990 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:34:29.853462   53990 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:34:29.995486   53990 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:34:29.995701   53990 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:34:30.017180   53990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:34:30.157763   53990 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:34:31.640649   53990 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.482771761s)
	I0229 18:34:31.640812   53990 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:34:31.672659   53990 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:34:31.709597   53990 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 18:34:31.709656   53990 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:34:31.712876   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:31.713338   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:34:16 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:34:31.713363   53990 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:34:31.713698   53990 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:34:31.719107   53990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:34:31.736384   53990 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:34:31.736464   53990 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:34:31.756440   53990 docker.go:685] Got preloaded images: 
	I0229 18:34:31.756470   53990 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:34:31.756522   53990 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:34:31.768429   53990 ssh_runner.go:195] Run: which lz4
	I0229 18:34:31.773425   53990 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:34:31.779081   53990 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:34:31.779121   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 18:34:33.339142   53990 docker.go:649] Took 1.565750 seconds to copy over tarball
	I0229 18:34:33.339230   53990 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:34:35.830277   53990 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491017242s)
	I0229 18:34:35.830306   53990 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:34:35.867468   53990 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:34:35.878987   53990 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 18:34:35.903390   53990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:34:36.025556   53990 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:34:37.682270   53990 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.656673513s)
	I0229 18:34:37.682380   53990 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:34:37.714918   53990 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:34:37.714936   53990 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:34:37.714945   53990 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:34:37.717268   53990 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:34:37.717380   53990 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:34:37.717402   53990 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:34:37.717264   53990 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:34:37.717454   53990 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:34:37.717648   53990 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:34:37.717657   53990 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:34:37.717685   53990 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:34:37.718860   53990 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:34:37.719001   53990 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:34:37.719084   53990 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:34:37.719142   53990 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:34:37.719172   53990 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:34:37.719300   53990 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:34:37.719072   53990 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:34:37.719391   53990 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:34:37.851096   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:34:37.852836   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:34:37.859293   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:34:37.859453   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:34:37.868936   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:34:37.887966   53990 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:34:37.888040   53990 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:34:37.888107   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:34:37.892888   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:34:37.894833   53990 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:34:37.894881   53990 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:34:37.894937   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:34:37.903178   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:34:37.957084   53990 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:34:37.957152   53990 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:34:37.957202   53990 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 18:34:37.957215   53990 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:34:37.957259   53990 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:34:37.957279   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 18:34:37.957308   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 18:34:37.957291   53990 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:34:37.957455   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:34:37.991689   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:34:38.000485   53990 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:34:38.000541   53990 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:34:38.000593   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:34:38.000727   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:34:38.000793   53990 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:34:38.000820   53990 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:34:38.000859   53990 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:34:38.042910   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:34:38.047717   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:34:38.052380   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:34:38.060998   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:34:38.061072   53990 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:34:38.359210   53990 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:34:38.386801   53990 cache_images.go:92] LoadImages completed in 671.839501ms
	W0229 18:34:38.386918   53990 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
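
The X warning is non-fatal here: the cache directory under .minikube/cache/images is empty, so LoadImages cannot satisfy the registry.k8s.io references, but the preload tarball already provided the same v1.16.0 images under their k8s.gcr.io names and kubeadm v1.16 pulls from k8s.gcr.io by default, so the start continues. If the registry.k8s.io names were actually required, one hedged workaround would be to retag the preloaded images inside the VM, for example:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // Retags the preloaded k8s.gcr.io images (listed in the `docker images`
    // output above) under their registry.k8s.io names via `docker tag`, so a
    // check that expects the newer registry would find them. Purely a
    // workaround sketch, not something minikube does in this run.
    func main() {
        images := []string{
            "kube-apiserver:v1.16.0",
            "kube-controller-manager:v1.16.0",
            "kube-scheduler:v1.16.0",
            "kube-proxy:v1.16.0",
            "etcd:3.3.15-0",
            "coredns:1.6.2",
            "pause:3.1",
        }
        for _, img := range images {
            src, dst := "k8s.gcr.io/"+img, "registry.k8s.io/"+img
            if out, err := exec.Command("docker", "tag", src, dst).CombinedOutput(); err != nil {
                log.Fatalf("docker tag %s %s: %v: %s", src, dst, err, out)
            }
            fmt.Println("tagged", dst)
        }
    }
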
	I0229 18:34:38.386992   53990 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:34:38.423125   53990 cni.go:84] Creating CNI manager for ""
	I0229 18:34:38.423158   53990 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:34:38.423176   53990 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:34:38.423199   53990 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467811 NodeName:old-k8s-version-467811 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:34:38.423393   53990 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-467811"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467811
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.30:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:34:38.423503   53990 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-467811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:34:38.423577   53990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:34:38.438566   53990 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:34:38.438650   53990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:34:38.453359   53990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 18:34:38.477964   53990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:34:38.502349   53990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 18:34:38.534066   53990 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0229 18:34:38.539990   53990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:34:38.557291   53990 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811 for IP: 192.168.72.30
	I0229 18:34:38.557327   53990 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:38.557524   53990 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:34:38.557588   53990 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:34:38.557647   53990 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.key
	I0229 18:34:38.557666   53990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.crt with IP's: []
	I0229 18:34:38.638135   53990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.crt ...
	I0229 18:34:38.638167   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.crt: {Name:mk56f32ef252300376ddfc10e5de3f49a1214e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:38.638376   53990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.key ...
	I0229 18:34:38.638394   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.key: {Name:mk3e1333b883af66519f16943defd54a63c162ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:38.638485   53990 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key.28caea67
	I0229 18:34:38.638500   53990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt.28caea67 with IP's: [192.168.72.30 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:34:38.898944   53990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt.28caea67 ...
	I0229 18:34:38.898986   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt.28caea67: {Name:mk8dff4862dad4ce3a08c4c46b19ee50c5f1d640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:38.899184   53990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key.28caea67 ...
	I0229 18:34:38.899205   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key.28caea67: {Name:mkd434d47b82a38f024cc2430a84569a754f327d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:38.899326   53990 certs.go:337] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt.28caea67 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt
	I0229 18:34:38.899443   53990 certs.go:341] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key.28caea67 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key
	I0229 18:34:38.899545   53990 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key
	I0229 18:34:38.899569   53990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.crt with IP's: []
	I0229 18:34:39.044450   53990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.crt ...
	I0229 18:34:39.044491   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.crt: {Name:mk64319f3224aa460ff318ce74243456d77b0aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:34:39.044722   53990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key ...
	I0229 18:34:39.044745   53990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key: {Name:mkde13ec3e514533928b1b0c78aca81e8a93802b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
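
Each "generating ... signed cert" step above creates an RSA key and a certificate signed by the shared minikubeCA (or proxyClientCA) loaded earlier. A self-contained Go sketch of the client-certificate case; the CA file names, subject and validity below are placeholders, and the CA key is assumed to be a PKCS#1-encoded RSA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Load an existing CA certificate and key (placeholder file names).
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca.key")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            log.Fatal("missing PEM block in ca.crt or ca.key")
        }
        ca, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }

        // Generate the client key and a CA-signed client-auth certificate.
        clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &clientKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        certOut := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)})
        if err := os.WriteFile("client.crt", certOut, 0o644); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("client.key", keyOut, 0o600); err != nil {
            log.Fatal(err)
        }
    }
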
	I0229 18:34:39.045025   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:34:39.045083   53990 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:34:39.045103   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:34:39.045143   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:34:39.045182   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:34:39.045209   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:34:39.045259   53990 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:34:39.045783   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:34:39.078107   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:34:39.113355   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:34:39.148277   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:34:39.180788   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:34:39.215355   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:34:39.253008   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:34:39.291726   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:34:39.333718   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:34:39.383526   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:34:39.424158   53990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:34:39.462777   53990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:34:39.491185   53990 ssh_runner.go:195] Run: openssl version
	I0229 18:34:39.499846   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:34:39.517395   53990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:34:39.524676   53990 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:34:39.524752   53990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:34:39.533980   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:34:39.554539   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:34:39.571825   53990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:34:39.578984   53990 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:34:39.579057   53990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:34:39.588042   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:34:39.606277   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:34:39.624185   53990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:34:39.630834   53990 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:34:39.630895   53990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:34:39.639268   53990 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:34:39.657318   53990 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:34:39.663077   53990 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:34:39.663163   53990 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:34:39.663375   53990 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:34:39.689331   53990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:34:39.705598   53990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:34:39.721332   53990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:34:39.737469   53990 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:34:39.737520   53990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:34:39.888161   53990 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:34:39.888235   53990 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:34:40.287298   53990 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:34:40.287422   53990 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:34:40.287531   53990 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:34:40.528115   53990 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:34:40.530839   53990 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:34:40.544232   53990 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:34:40.721748   53990 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:34:40.724394   53990 out.go:204]   - Generating certificates and keys ...
	I0229 18:34:40.724501   53990 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:34:40.724579   53990 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:34:40.942186   53990 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:34:41.092727   53990 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:34:41.227802   53990 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:34:41.500011   53990 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:34:41.578558   53990 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:34:41.578848   53990 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0229 18:34:42.060544   53990 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:34:42.060874   53990 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	I0229 18:34:42.292088   53990 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:34:42.768555   53990 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:34:43.025508   53990 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:34:43.025817   53990 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:34:43.562736   53990 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:34:43.980931   53990 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:34:44.101330   53990 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:34:44.171579   53990 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:34:44.172528   53990 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:34:44.174249   53990 out.go:204]   - Booting up control plane ...
	I0229 18:34:44.174379   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:34:44.180611   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:34:44.182430   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:34:44.184045   53990 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:34:44.187894   53990 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:35:24.184992   53990 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:35:24.185697   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:24.185984   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:29.186541   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:29.186844   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:39.189963   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:39.190202   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:35:59.189826   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:35:59.190065   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:39.190391   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:36:39.190997   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:36:39.191022   53990 kubeadm.go:322] 
	I0229 18:36:39.191122   53990 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:36:39.191228   53990 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:36:39.191243   53990 kubeadm.go:322] 
	I0229 18:36:39.191322   53990 kubeadm.go:322] This error is likely caused by:
	I0229 18:36:39.191404   53990 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:36:39.191672   53990 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:36:39.191689   53990 kubeadm.go:322] 
	I0229 18:36:39.191964   53990 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:36:39.192068   53990 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:36:39.192172   53990 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:36:39.192183   53990 kubeadm.go:322] 
	I0229 18:36:39.192446   53990 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:36:39.192685   53990 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:36:39.192866   53990 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:36:39.192983   53990 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:36:39.193149   53990 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:36:39.193249   53990 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:36:39.194509   53990 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:36:39.194690   53990 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:36:39.194820   53990 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:36:39.194943   53990 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:36:39.195080   53990 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:36:39.195222   53990 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-467811 localhost] and IPs [192.168.72.30 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:36:39.195309   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:36:39.681779   53990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:36:39.698114   53990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:36:39.709496   53990 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:36:39.709537   53990 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:36:39.881267   53990 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:36:39.915206   53990 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:36:40.012379   53990 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:38:36.006516   53990 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:38:36.006633   53990 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:38:36.008331   53990 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:38:36.008391   53990 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:38:36.008476   53990 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:38:36.008622   53990 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:38:36.008737   53990 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:38:36.008865   53990 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:38:36.008993   53990 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:38:36.009074   53990 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:38:36.009155   53990 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:38:36.011175   53990 out.go:204]   - Generating certificates and keys ...
	I0229 18:38:36.011292   53990 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:38:36.011405   53990 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:38:36.011524   53990 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:38:36.011629   53990 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:38:36.011759   53990 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:38:36.011847   53990 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:38:36.011953   53990 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:38:36.012034   53990 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:38:36.012145   53990 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:38:36.012246   53990 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:38:36.012325   53990 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:38:36.012412   53990 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:38:36.012477   53990 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:38:36.012522   53990 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:38:36.012577   53990 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:38:36.012648   53990 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:38:36.012754   53990 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:38:36.014285   53990 out.go:204]   - Booting up control plane ...
	I0229 18:38:36.014375   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:38:36.014458   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:38:36.014549   53990 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:38:36.014671   53990 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:38:36.014839   53990 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:38:36.014914   53990 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:38:36.014994   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:38:36.015147   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:38:36.015205   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:38:36.015427   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:38:36.015547   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:38:36.015825   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:38:36.015919   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:38:36.016175   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:38:36.016283   53990 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:38:36.016512   53990 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:38:36.016524   53990 kubeadm.go:322] 
	I0229 18:38:36.016592   53990 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:38:36.016655   53990 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:38:36.016686   53990 kubeadm.go:322] 
	I0229 18:38:36.016735   53990 kubeadm.go:322] This error is likely caused by:
	I0229 18:38:36.016785   53990 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:38:36.016900   53990 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:38:36.016909   53990 kubeadm.go:322] 
	I0229 18:38:36.017031   53990 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:38:36.017081   53990 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:38:36.017126   53990 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:38:36.017137   53990 kubeadm.go:322] 
	I0229 18:38:36.017283   53990 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:38:36.017422   53990 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:38:36.017534   53990 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:38:36.017602   53990 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:38:36.017699   53990 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:38:36.017822   53990 kubeadm.go:406] StartCluster complete in 3m56.354664464s
	I0229 18:38:36.017919   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:38:36.018012   53990 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:38:36.050379   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.050404   53990 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:38:36.050458   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:38:36.073442   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.073477   53990 logs.go:278] No container was found matching "etcd"
	I0229 18:38:36.073537   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:38:36.102887   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.102915   53990 logs.go:278] No container was found matching "coredns"
	I0229 18:38:36.102977   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:38:36.126038   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.126065   53990 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:38:36.126122   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:38:36.149058   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.149089   53990 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:38:36.149155   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:38:36.168804   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.168835   53990 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:38:36.168895   53990 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:38:36.189748   53990 logs.go:276] 0 containers: []
	W0229 18:38:36.189779   53990 logs.go:278] No container was found matching "kindnet"
	I0229 18:38:36.189793   53990 logs.go:123] Gathering logs for kubelet ...
	I0229 18:38:36.189809   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:38:36.246130   53990 logs.go:123] Gathering logs for dmesg ...
	I0229 18:38:36.246163   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:38:36.263150   53990 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:38:36.263179   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:38:36.337294   53990 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:38:36.337320   53990 logs.go:123] Gathering logs for Docker ...
	I0229 18:38:36.337341   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:38:36.380441   53990 logs.go:123] Gathering logs for container status ...
	I0229 18:38:36.380474   53990 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:38:36.448817   53990 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:38:36.448883   53990 out.go:239] * 
	* 
	W0229 18:38:36.448943   53990 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:38:36.448977   53990 out.go:239] * 
	* 
	W0229 18:38:36.449833   53990 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:38:36.453093   53990 out.go:177] 
	W0229 18:38:36.454435   53990 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:38:36.454495   53990 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:38:36.454525   53990 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:38:36.456057   53990 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 6 (279.980121ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:38:36.780047   60554 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-467811" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467811" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.46s)
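Note: the kubeadm stderr above flags a cgroup-driver mismatch (Docker on "cgroupfs", recommended "systemd"), and minikube's own suggestion line is to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal follow-up sketch, assuming the same profile name and binary as the failed run; the ssh wrapper lines are illustrative and were not part of this recorded run:

	# Inspect the kubelet on the node, per the kubeadm hints above
	out/minikube-linux-amd64 ssh -p old-k8s-version-467811 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-467811 -- sudo journalctl -xeu kubelet
	# Retry the first start with the cgroup driver minikube suggests
	out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --driver=kvm2 \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd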

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-467811 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-467811 create -f testdata/busybox.yaml: exit status 1 (48.584805ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-467811" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-467811 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 6 (258.181985ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:38:37.089489   60593 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-467811" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467811" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 6 (246.252531ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:38:37.335455   60622 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-467811" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467811" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)
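Note: this DeployApp failure is a knock-on effect of the failed first start; kubectl simply has no "old-k8s-version-467811" context, and the status output again points at a stale kubeconfig entry. A quick check-and-repair sketch, assuming the profile eventually reaches Running, following the update-context hint printed above:

	# See which contexts kubectl actually has
	kubectl config get-contexts
	# Re-sync the kubeconfig entry for this profile, as the status warning suggests
	out/minikube-linux-amd64 update-context -p old-k8s-version-467811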

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (90.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467811 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0229 18:38:37.568262   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:42.066666   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:42.688673   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:44.855072   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:52.929798   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:39:02.463843   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:39:05.336126   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:39:13.409990   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:39:23.027686   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:39:28.711066   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:28.716335   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:28.726611   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:28.746902   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:28.787344   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:28.867842   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:29.028278   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:29.348858   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:29.990004   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:31.271138   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:33.831567   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:38.952758   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:46.296601   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:39:49.192884   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:39:50.757295   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:39:54.370369   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467811 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m30.611040631s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-467811 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-467811 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-467811 describe deploy/metrics-server -n kube-system: exit status 1 (44.490594ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-467811" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-467811 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 6 (258.308577ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0229 18:40:08.249098   60899 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-467811" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-467811" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (90.91s)
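Note: the addon failure is likewise secondary; every kubectl apply is refused at https://localhost:8443 because the apiserver never came up, so metrics-server cannot be enabled. A hedged pre-check sketch before retrying the addon, using the log-collection command the box above recommends (the -p flag on logs is assumed to carry over from the other commands in this run):

	# Confirm the control plane state for the profile
	out/minikube-linux-amd64 status -p old-k8s-version-467811
	# Collect full logs for a GitHub issue, as suggested above
	out/minikube-linux-amd64 logs -p old-k8s-version-467811 --file=logs.txt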

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (520.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E0229 18:40:17.206747   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:40:18.988045   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:18.993332   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.003673   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.023989   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.064288   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.144623   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.305040   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:19.625407   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:20.265811   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:21.546994   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:23.103182   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:40:23.977160   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:40:24.107578   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:29.228098   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:39.468529   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:40:43.520900   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 18:40:44.889425   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:40:44.947889   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:40:50.633810   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
E0229 18:40:50.788687   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:40:59.949379   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:41:00.469621   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 18:41:08.217122   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:41:16.291180   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:41:18.619944   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:41:40.910425   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:41:46.304703   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:41:47.021613   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:42:06.913591   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:42:12.554921   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: exit status 109 (8m38.455397779s)

                                                
                                                
-- stdout --
	* [old-k8s-version-467811] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node old-k8s-version-467811 in cluster old-k8s-version-467811
	* Restarting existing kvm2 VM for "old-k8s-version-467811" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:40:10.678453   61028 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:40:10.678600   61028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:40:10.678609   61028 out.go:304] Setting ErrFile to fd 2...
	I0229 18:40:10.678613   61028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:40:10.678886   61028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:40:10.679419   61028 out.go:298] Setting JSON to false
	I0229 18:40:10.680444   61028 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4961,"bootTime":1709227050,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:40:10.680511   61028 start.go:139] virtualization: kvm guest
	I0229 18:40:10.683118   61028 out.go:177] * [old-k8s-version-467811] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:40:10.684759   61028 notify.go:220] Checking for updates...
	I0229 18:40:10.684785   61028 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:40:10.686379   61028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:40:10.687868   61028 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:40:10.689608   61028 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:40:10.691092   61028 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:40:10.692630   61028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:40:10.694677   61028 config.go:182] Loaded profile config "old-k8s-version-467811": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:40:10.695147   61028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:40:10.695205   61028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:40:10.714865   61028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0229 18:40:10.715270   61028 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:40:10.715818   61028 main.go:141] libmachine: Using API Version  1
	I0229 18:40:10.715842   61028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:40:10.716234   61028 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:40:10.716460   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:10.718544   61028 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0229 18:40:10.720020   61028 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:40:10.720346   61028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:40:10.720396   61028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:40:10.734886   61028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0229 18:40:10.735372   61028 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:40:10.735916   61028 main.go:141] libmachine: Using API Version  1
	I0229 18:40:10.735938   61028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:40:10.736252   61028 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:40:10.736453   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:10.772340   61028 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:40:10.773723   61028 start.go:299] selected driver: kvm2
	I0229 18:40:10.773739   61028 start.go:903] validating driver "kvm2" against &{Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:40:10.773862   61028 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:40:10.775248   61028 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:40:10.775356   61028 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:40:10.790470   61028 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:40:10.790806   61028 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:40:10.790888   61028 cni.go:84] Creating CNI manager for ""
	I0229 18:40:10.790907   61028 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:40:10.790917   61028 start_flags.go:323] config:
	{Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:40:10.791056   61028 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:40:10.792794   61028 out.go:177] * Starting control plane node old-k8s-version-467811 in cluster old-k8s-version-467811
	I0229 18:40:10.794232   61028 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:40:10.794278   61028 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 18:40:10.794295   61028 cache.go:56] Caching tarball of preloaded images
	I0229 18:40:10.794383   61028 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:40:10.794397   61028 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 18:40:10.794513   61028 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json ...
	I0229 18:40:10.794687   61028 start.go:365] acquiring machines lock for old-k8s-version-467811: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:40:10.794737   61028 start.go:369] acquired machines lock for "old-k8s-version-467811" in 31.284µs
	I0229 18:40:10.794751   61028 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:40:10.794758   61028 fix.go:54] fixHost starting: 
	I0229 18:40:10.794986   61028 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:40:10.795015   61028 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:40:10.809952   61028 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43003
	I0229 18:40:10.810342   61028 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:40:10.810769   61028 main.go:141] libmachine: Using API Version  1
	I0229 18:40:10.810791   61028 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:40:10.811088   61028 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:40:10.811283   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:10.811426   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetState
	I0229 18:40:10.813101   61028 fix.go:102] recreateIfNeeded on old-k8s-version-467811: state=Stopped err=<nil>
	I0229 18:40:10.813128   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	W0229 18:40:10.813298   61028 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:40:10.815261   61028 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-467811" ...
	I0229 18:40:10.816626   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .Start
	I0229 18:40:10.816778   61028 main.go:141] libmachine: (old-k8s-version-467811) Ensuring networks are active...
	I0229 18:40:10.817524   61028 main.go:141] libmachine: (old-k8s-version-467811) Ensuring network default is active
	I0229 18:40:10.817876   61028 main.go:141] libmachine: (old-k8s-version-467811) Ensuring network mk-old-k8s-version-467811 is active
	I0229 18:40:10.818352   61028 main.go:141] libmachine: (old-k8s-version-467811) Getting domain xml...
	I0229 18:40:10.819257   61028 main.go:141] libmachine: (old-k8s-version-467811) Creating domain...
	I0229 18:40:12.124193   61028 main.go:141] libmachine: (old-k8s-version-467811) Waiting to get IP...
	I0229 18:40:12.125273   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:12.125768   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:12.125883   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:12.125759   61063 retry.go:31] will retry after 229.865272ms: waiting for machine to come up
	I0229 18:40:12.357523   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:12.358149   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:12.358192   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:12.358122   61063 retry.go:31] will retry after 287.869142ms: waiting for machine to come up
	I0229 18:40:12.647707   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:12.648277   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:12.648300   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:12.648245   61063 retry.go:31] will retry after 293.400738ms: waiting for machine to come up
	I0229 18:40:12.943894   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:12.944389   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:12.944434   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:12.944338   61063 retry.go:31] will retry after 491.233058ms: waiting for machine to come up
	I0229 18:40:13.436881   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:13.437490   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:13.437512   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:13.437440   61063 retry.go:31] will retry after 590.256223ms: waiting for machine to come up
	I0229 18:40:14.029692   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:14.030181   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:14.030204   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:14.030148   61063 retry.go:31] will retry after 576.352262ms: waiting for machine to come up
	I0229 18:40:14.607851   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:14.608422   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:14.608452   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:14.608372   61063 retry.go:31] will retry after 850.910374ms: waiting for machine to come up
	I0229 18:40:15.461172   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:15.461758   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:15.461790   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:15.461704   61063 retry.go:31] will retry after 1.268595352s: waiting for machine to come up
	I0229 18:40:16.732363   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:16.732899   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:16.732926   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:16.732847   61063 retry.go:31] will retry after 1.176515987s: waiting for machine to come up
	I0229 18:40:17.911232   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:17.911816   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:17.911840   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:17.911762   61063 retry.go:31] will retry after 2.323351402s: waiting for machine to come up
	I0229 18:40:20.236694   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:20.237443   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:20.237474   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:20.237368   61063 retry.go:31] will retry after 1.758464638s: waiting for machine to come up
	I0229 18:40:21.998068   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:21.998805   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:21.998891   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:21.998810   61063 retry.go:31] will retry after 2.583727042s: waiting for machine to come up
	I0229 18:40:24.584657   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:24.585215   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | unable to find current IP address of domain old-k8s-version-467811 in network mk-old-k8s-version-467811
	I0229 18:40:24.585242   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | I0229 18:40:24.585181   61063 retry.go:31] will retry after 3.761661072s: waiting for machine to come up
	I0229 18:40:28.349441   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.349967   61028 main.go:141] libmachine: (old-k8s-version-467811) Found IP for machine: 192.168.72.30
	I0229 18:40:28.349994   61028 main.go:141] libmachine: (old-k8s-version-467811) Reserving static IP address...
	I0229 18:40:28.350009   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has current primary IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.350457   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "old-k8s-version-467811", mac: "52:54:00:44:95:0e", ip: "192.168.72.30"} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.350477   61028 main.go:141] libmachine: (old-k8s-version-467811) Reserved static IP address: 192.168.72.30
	I0229 18:40:28.350495   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | skip adding static IP to network mk-old-k8s-version-467811 - found existing host DHCP lease matching {name: "old-k8s-version-467811", mac: "52:54:00:44:95:0e", ip: "192.168.72.30"}
	I0229 18:40:28.350511   61028 main.go:141] libmachine: (old-k8s-version-467811) Waiting for SSH to be available...
	I0229 18:40:28.350525   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | Getting to WaitForSSH function...
	I0229 18:40:28.352532   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.352814   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.352833   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.353001   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH client type: external
	I0229 18:40:28.353023   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa (-rw-------)
	I0229 18:40:28.353050   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.30 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:40:28.353078   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | About to run SSH command:
	I0229 18:40:28.353092   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | exit 0
	I0229 18:40:28.476200   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | SSH cmd err, output: <nil>: 
	I0229 18:40:28.476562   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetConfigRaw
	I0229 18:40:28.477255   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:40:28.479963   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.480335   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.480376   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.480574   61028 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/config.json ...
	I0229 18:40:28.480800   61028 machine.go:88] provisioning docker machine ...
	I0229 18:40:28.480832   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:28.481042   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:40:28.481247   61028 buildroot.go:166] provisioning hostname "old-k8s-version-467811"
	I0229 18:40:28.481269   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:40:28.481395   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:28.483883   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.484188   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.484219   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.484328   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:28.484469   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:28.484649   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:28.484811   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:28.484986   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:28.485223   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:28.485239   61028 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-467811 && echo "old-k8s-version-467811" | sudo tee /etc/hostname
	I0229 18:40:28.604162   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-467811
	
	I0229 18:40:28.604194   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:28.607085   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.607338   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.607368   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.607518   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:28.607701   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:28.607888   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:28.608071   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:28.608288   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:28.608491   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:28.608509   61028 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-467811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-467811/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-467811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:40:28.726673   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:40:28.726715   61028 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:40:28.726743   61028 buildroot.go:174] setting up certificates
	I0229 18:40:28.726760   61028 provision.go:83] configureAuth start
	I0229 18:40:28.726776   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetMachineName
	I0229 18:40:28.727074   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:40:28.730429   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.730848   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.730882   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.731215   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:28.733926   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.734367   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.734421   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.734622   61028 provision.go:138] copyHostCerts
	I0229 18:40:28.734724   61028 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:40:28.734761   61028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:40:28.734935   61028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:40:28.735198   61028 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:40:28.735212   61028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:40:28.735292   61028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:40:28.735461   61028 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:40:28.735478   61028 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:40:28.735521   61028 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:40:28.735662   61028 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-467811 san=[192.168.72.30 192.168.72.30 localhost 127.0.0.1 minikube old-k8s-version-467811]
	I0229 18:40:28.843116   61028 provision.go:172] copyRemoteCerts
	I0229 18:40:28.843178   61028 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:40:28.843199   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:28.846321   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.846640   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:28.846671   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:28.846972   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:28.847183   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:28.847341   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:28.847490   61028 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:40:28.939114   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:40:28.965614   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0229 18:40:28.992085   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:40:29.019199   61028 provision.go:86] duration metric: configureAuth took 292.422618ms
	I0229 18:40:29.019228   61028 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:40:29.019440   61028 config.go:182] Loaded profile config "old-k8s-version-467811": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 18:40:29.019471   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:29.019796   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:29.022767   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.023271   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:29.023304   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.023531   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:29.023798   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.023976   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.024142   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:29.024312   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:29.024519   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:29.024531   61028 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:40:29.138062   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:40:29.138091   61028 buildroot.go:70] root file system type: tmpfs
	I0229 18:40:29.138249   61028 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:40:29.138287   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:29.141256   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.141605   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:29.141636   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.141823   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:29.142015   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.142235   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.142387   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:29.142540   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:29.142725   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:29.142781   61028 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:40:29.267135   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:40:29.267173   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:29.270110   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.270517   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:29.270572   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:29.270743   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:29.270934   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.271071   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:29.271245   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:29.271424   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:29.271621   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:29.271664   61028 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:40:30.107814   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:40:30.107849   61028 machine.go:91] provisioned docker machine in 1.627037807s
	I0229 18:40:30.107860   61028 start.go:300] post-start starting for "old-k8s-version-467811" (driver="kvm2")
	I0229 18:40:30.107870   61028 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:40:30.107884   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:30.108194   61028 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:40:30.108223   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:30.111120   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.111505   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:30.111536   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.111697   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:30.111894   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:30.112070   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:30.112208   61028 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:40:30.200210   61028 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:40:30.204782   61028 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:40:30.204807   61028 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:40:30.204861   61028 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:40:30.204936   61028 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:40:30.205047   61028 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:40:30.215764   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:40:30.243488   61028 start.go:303] post-start completed in 135.614479ms
	I0229 18:40:30.243518   61028 fix.go:56] fixHost completed within 19.448757798s
	I0229 18:40:30.243545   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:30.246196   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.246589   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:30.246624   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.246792   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:30.247000   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:30.247142   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:30.247273   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:30.247462   61028 main.go:141] libmachine: Using SSH client type: native
	I0229 18:40:30.247612   61028 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.72.30 22 <nil> <nil>}
	I0229 18:40:30.247621   61028 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:40:30.356801   61028 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232030.335906014
	
	I0229 18:40:30.356827   61028 fix.go:206] guest clock: 1709232030.335906014
	I0229 18:40:30.356837   61028 fix.go:219] Guest: 2024-02-29 18:40:30.335906014 +0000 UTC Remote: 2024-02-29 18:40:30.243521701 +0000 UTC m=+19.613669094 (delta=92.384313ms)
	I0229 18:40:30.356876   61028 fix.go:190] guest clock delta is within tolerance: 92.384313ms
	I0229 18:40:30.356881   61028 start.go:83] releasing machines lock for "old-k8s-version-467811", held for 19.562134572s
	I0229 18:40:30.356902   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:30.357174   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:40:30.359958   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.360331   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:30.360361   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.360572   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:30.361052   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:30.361254   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .DriverName
	I0229 18:40:30.361341   61028 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:40:30.361376   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:30.361467   61028 ssh_runner.go:195] Run: cat /version.json
	I0229 18:40:30.361498   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHHostname
	I0229 18:40:30.364141   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.364181   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.364549   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:30.364599   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:30.364627   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.364644   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:30.364767   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:30.364964   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHPort
	I0229 18:40:30.364966   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:30.365169   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHKeyPath
	I0229 18:40:30.365177   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:30.365316   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetSSHUsername
	I0229 18:40:30.365355   61028 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:40:30.365460   61028 sshutil.go:53] new ssh client: &{IP:192.168.72.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/old-k8s-version-467811/id_rsa Username:docker}
	I0229 18:40:30.464189   61028 ssh_runner.go:195] Run: systemctl --version
	I0229 18:40:30.471236   61028 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:40:30.477530   61028 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:40:30.477588   61028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:40:30.489429   61028 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:40:30.510736   61028 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:40:30.510763   61028 start.go:475] detecting cgroup driver to use...
	I0229 18:40:30.510869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:40:30.543914   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 18:40:30.557228   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:40:30.568762   61028 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:40:30.568825   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:40:30.580353   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:40:30.591859   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:40:30.603319   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:40:30.614961   61028 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:40:30.626354   61028 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:40:30.637821   61028 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:40:30.648152   61028 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:40:30.658797   61028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:40:30.795985   61028 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:40:30.821835   61028 start.go:475] detecting cgroup driver to use...
	I0229 18:40:30.821918   61028 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:40:30.839285   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:40:30.865917   61028 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:40:30.885988   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:40:30.901028   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:40:30.917179   61028 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:40:30.947585   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:40:30.962683   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:40:30.982885   61028 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:40:30.987189   61028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:40:30.996658   61028 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:40:31.014701   61028 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:40:31.145249   61028 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:40:31.285997   61028 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:40:31.286137   61028 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:40:31.307114   61028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:40:31.442389   61028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:40:32.837992   61028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.395559648s)
	I0229 18:40:32.838060   61028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:40:32.868200   61028 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:40:32.899476   61028 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 18:40:32.899528   61028 main.go:141] libmachine: (old-k8s-version-467811) Calling .GetIP
	I0229 18:40:32.902168   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:32.902536   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:95:0e", ip: ""} in network mk-old-k8s-version-467811: {Iface:virbr2 ExpiryTime:2024-02-29 19:40:22 +0000 UTC Type:0 Mac:52:54:00:44:95:0e Iaid: IPaddr:192.168.72.30 Prefix:24 Hostname:old-k8s-version-467811 Clientid:01:52:54:00:44:95:0e}
	I0229 18:40:32.902571   61028 main.go:141] libmachine: (old-k8s-version-467811) DBG | domain old-k8s-version-467811 has defined IP address 192.168.72.30 and MAC address 52:54:00:44:95:0e in network mk-old-k8s-version-467811
	I0229 18:40:32.902778   61028 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0229 18:40:32.907317   61028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:40:32.921500   61028 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 18:40:32.921553   61028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:40:32.944724   61028 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:40:32.944751   61028 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:40:32.944804   61028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:40:32.955816   61028 ssh_runner.go:195] Run: which lz4
	I0229 18:40:32.959963   61028 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:40:32.964524   61028 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:40:32.964553   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 18:40:34.465883   61028 docker.go:649] Took 1.505952 seconds to copy over tarball
	I0229 18:40:34.465973   61028 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:40:36.737232   61028 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.271207639s)
	I0229 18:40:36.737260   61028 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:40:36.773359   61028 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:40:36.784345   61028 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 18:40:36.804299   61028 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:40:36.933480   61028 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:40:40.539911   61028 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.606390492s)
	I0229 18:40:40.540013   61028 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:40:40.562940   61028 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 18:40:40.562970   61028 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 18:40:40.562981   61028 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:40:40.565129   61028 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 18:40:40.565161   61028 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:40:40.565177   61028 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 18:40:40.565213   61028 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:40:40.565134   61028 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:40:40.565352   61028 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:40:40.565426   61028 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:40:40.565521   61028 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:40:40.566025   61028 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:40:40.566100   61028 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0229 18:40:40.566113   61028 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:40:40.566241   61028 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 18:40:40.566260   61028 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:40:40.566259   61028 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:40:40.566260   61028 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:40:40.566818   61028 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:40:40.704508   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 18:40:40.705644   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:40:40.712252   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:40:40.721098   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 18:40:40.723695   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 18:40:40.729679   61028 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 18:40:40.729726   61028 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	I0229 18:40:40.729768   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 18:40:40.737946   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:40:40.738321   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:40:40.741620   61028 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 18:40:40.741665   61028 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:40:40.741720   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 18:40:40.800244   61028 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 18:40:40.800293   61028 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:40:40.800355   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 18:40:40.800693   61028 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 18:40:40.800741   61028 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 18:40:40.800791   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 18:40:40.805827   61028 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 18:40:40.805881   61028 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 18:40:40.805917   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 18:40:40.841580   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0229 18:40:40.841655   61028 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 18:40:40.841707   61028 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:40:40.841759   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 18:40:40.843065   61028 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 18:40:40.843107   61028 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:40:40.843142   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0229 18:40:40.843146   61028 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 18:40:40.878829   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0229 18:40:40.885719   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0229 18:40:40.885765   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0229 18:40:40.888416   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0229 18:40:40.890600   61028 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0229 18:40:41.131239   61028 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:40:41.152234   61028 cache_images.go:92] LoadImages completed in 589.219691ms
	W0229 18:40:41.152321   61028 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1: no such file or directory
	I0229 18:40:41.152392   61028 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:40:41.181526   61028 cni.go:84] Creating CNI manager for ""
	I0229 18:40:41.181561   61028 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:40:41.181580   61028 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:40:41.181603   61028 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.30 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-467811 NodeName:old-k8s-version-467811 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.30 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:40:41.181768   61028 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.30
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-467811"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.30
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.30"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-467811
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.72.30:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:40:41.181890   61028 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-467811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.30
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:40:41.181957   61028 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 18:40:41.192999   61028 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:40:41.193095   61028 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:40:41.203350   61028 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0229 18:40:41.222086   61028 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:40:41.240000   61028 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2178 bytes)
	I0229 18:40:41.259524   61028 ssh_runner.go:195] Run: grep 192.168.72.30	control-plane.minikube.internal$ /etc/hosts
	I0229 18:40:41.263759   61028 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.30	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:40:41.280005   61028 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811 for IP: 192.168.72.30
	I0229 18:40:41.280056   61028 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:40:41.280236   61028 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:40:41.280296   61028 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:40:41.280418   61028 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/client.key
	I0229 18:40:41.280494   61028 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key.28caea67
	I0229 18:40:41.280559   61028 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key
	I0229 18:40:41.280697   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:40:41.280741   61028 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:40:41.280752   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:40:41.280772   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:40:41.280800   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:40:41.280826   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:40:41.280868   61028 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:40:41.281550   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:40:41.312605   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:40:41.338820   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:40:41.367309   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/old-k8s-version-467811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0229 18:40:41.394978   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:40:41.420871   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:40:41.448751   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:40:41.476754   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:40:41.504882   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:40:41.531256   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:40:41.560374   61028 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:40:41.587925   61028 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:40:41.608279   61028 ssh_runner.go:195] Run: openssl version
	I0229 18:40:41.614235   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:40:41.625237   61028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:40:41.629787   61028 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:40:41.629860   61028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:40:41.642510   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:40:41.656598   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:40:41.668231   61028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:40:41.673285   61028 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:40:41.673332   61028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:40:41.679923   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:40:41.691990   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:40:41.705952   61028 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:40:41.710807   61028 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:40:41.710860   61028 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:40:41.716948   61028 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:40:41.732149   61028 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:40:41.736881   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:40:41.743070   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:40:41.749580   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:40:41.756130   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:40:41.762439   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:40:41.769106   61028 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:40:41.775117   61028 kubeadm.go:404] StartCluster: {Name:old-k8s-version-467811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-467811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.30 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:40:41.775259   61028 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:40:41.793765   61028 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:40:41.804794   61028 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:40:41.804820   61028 kubeadm.go:636] restartCluster start
	I0229 18:40:41.804877   61028 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:40:41.815078   61028 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:41.816059   61028 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-467811" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:40:41.816687   61028 kubeconfig.go:146] "old-k8s-version-467811" context is missing from /home/jenkins/minikube-integration/18259-6402/kubeconfig - will repair!
	I0229 18:40:41.817704   61028 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:40:41.819360   61028 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:40:41.829904   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:41.829968   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:41.843979   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:42.330445   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:42.330530   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:42.344332   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:42.830003   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:42.830102   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:42.844521   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:43.330067   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:43.330150   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:43.344566   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:43.830062   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:43.830172   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:43.843668   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:44.330188   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:44.330322   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:44.344136   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:44.830745   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:44.830846   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:44.852098   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:45.330719   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:45.330812   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:45.345357   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:45.830207   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:45.830285   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:45.844469   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:46.330001   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:46.330068   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:46.344740   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:46.830239   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:46.830354   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:46.844720   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:47.330016   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:47.330080   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:47.345385   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:47.830038   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:47.830141   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:47.845388   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:48.329955   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:48.330060   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:48.349026   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:48.830232   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:48.830369   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:48.844797   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:49.330322   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:49.330398   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:49.345416   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:49.829935   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:49.830044   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:49.844190   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:50.330919   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:50.331039   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:50.346369   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:50.830881   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:50.830978   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:50.845704   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:51.330316   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:51.330418   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:51.344811   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:51.830494   61028 api_server.go:166] Checking apiserver status ...
	I0229 18:40:51.830584   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:40:51.844322   61028 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:40:51.844363   61028 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:40:51.844373   61028 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:40:51.844443   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:40:51.862509   61028 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:40:51.883576   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:40:51.894082   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:40:51.894134   61028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:40:51.904982   61028 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:40:51.905004   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:40:52.043559   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:40:52.986295   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:40:53.265224   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:40:53.366162   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:40:53.446997   61028 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:40:53.447089   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:53.947157   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:54.447900   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:54.947286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:55.447940   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:55.947782   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:56.448043   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:56.947242   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:57.447568   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:57.947587   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:58.447250   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:58.947771   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:59.447406   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:40:59.948133   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:00.447188   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:00.947323   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:01.447881   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:01.947666   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:02.447912   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:02.947274   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:03.447864   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:03.948123   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:04.447832   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:04.947862   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:05.447747   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:05.947465   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:06.447905   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:06.948194   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:07.447445   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:07.947278   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:08.447612   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:08.947298   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:09.447329   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:09.947225   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:10.447132   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:10.947203   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:11.447778   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:11.947618   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:12.447286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:12.947761   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:13.447245   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:13.947889   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:14.447482   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:14.947739   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:15.448109   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:15.947511   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:16.447733   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:16.947394   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:17.447907   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:17.947362   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:18.447472   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:18.947577   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:19.447156   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:19.947155   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:20.447214   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:20.948126   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:21.448079   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:21.947874   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:22.447391   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:22.947863   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:23.447217   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:23.947233   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:24.447937   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:24.948027   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:25.448015   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:25.947814   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:26.447282   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:26.948133   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:27.447573   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:27.947481   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:28.447253   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:28.947762   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:29.447830   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:29.947289   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:30.447805   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:30.948038   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:31.447942   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:31.947691   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:32.448177   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:32.947174   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:33.447592   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:33.947771   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:34.448120   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:34.947174   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:35.447311   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:35.947938   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:36.447917   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:36.948095   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:37.447193   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:37.947216   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:38.447258   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:38.947979   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:39.447383   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:39.947821   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:40.447758   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:40.947166   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:41.448046   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:41.947274   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:42.447323   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:42.948073   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:43.447935   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:43.948096   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:44.447869   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:44.947363   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:45.447306   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:45.948001   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:46.447369   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:46.947142   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:47.448148   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:47.947924   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:48.447203   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:48.947839   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:49.447755   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:49.947614   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:50.447946   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:50.947210   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:51.447253   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:51.947349   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:52.447457   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:52.947973   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:53.447143   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:41:53.471892   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.471922   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:53.471982   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:41:53.490646   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.490673   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:41:53.490750   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:41:53.509575   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.509602   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:41:53.509662   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:41:53.535101   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.535131   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:53.535219   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:41:53.554430   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.554457   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:53.554513   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:41:53.582958   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.582999   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:53.583056   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:41:53.609108   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.609141   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:53.609206   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:41:53.636613   61028 logs.go:276] 0 containers: []
	W0229 18:41:53.636643   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:53.636654   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:53.636687   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:53.691990   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:53.692021   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:53.707245   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:53.707277   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:53.777103   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:53.777132   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:41:53.777148   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:41:53.818951   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:41:53.818981   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:56.388670   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:56.407311   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:41:56.426316   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.426348   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:56.426400   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:41:56.449716   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.449742   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:41:56.449814   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:41:56.473306   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.473335   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:41:56.473413   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:41:56.493169   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.493216   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:56.493273   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:41:56.512763   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.512789   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:56.512851   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:41:56.530581   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.530612   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:56.530667   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:41:56.558582   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.558620   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:56.558688   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:41:56.592206   61028 logs.go:276] 0 containers: []
	W0229 18:41:56.592239   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:56.592250   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:41:56.592285   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:41:56.655843   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:41:56.655879   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:56.721216   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:56.721246   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:56.791791   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:56.791835   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:56.806726   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:56.806774   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:56.890322   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:59.390673   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:41:59.405215   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:41:59.426413   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.426442   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:41:59.426496   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:41:59.448166   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.448193   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:41:59.448245   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:41:59.468900   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.468930   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:41:59.469009   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:41:59.492394   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.492422   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:41:59.492479   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:41:59.511941   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.511968   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:41:59.512019   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:41:59.541365   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.541400   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:41:59.541461   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:41:59.590578   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.590611   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:41:59.590685   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:41:59.613214   61028 logs.go:276] 0 containers: []
	W0229 18:41:59.613245   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:41:59.613257   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:41:59.613271   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:41:59.689952   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:41:59.689984   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:41:59.744773   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:41:59.744808   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:41:59.760173   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:41:59.760207   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:41:59.841333   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:41:59.841360   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:41:59.841376   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:02.405194   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:02.420502   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:02.438987   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.439026   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:02.439070   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:02.461741   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.461774   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:02.461837   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:02.482807   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.482839   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:02.482891   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:02.504090   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.504120   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:02.504180   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:02.532849   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.532893   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:02.532960   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:02.553645   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.553675   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:02.553732   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:02.580328   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.580356   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:02.580417   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:02.610807   61028 logs.go:276] 0 containers: []
	W0229 18:42:02.610840   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:02.610854   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:02.610869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:02.681324   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:02.681369   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:02.696105   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:02.696133   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:02.767710   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:02.767739   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:02.767750   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:02.810227   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:02.810263   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:05.368493   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:05.383079   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:05.404181   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.404216   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:05.404276   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:05.424024   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.424050   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:05.424108   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:05.443595   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.443679   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:05.443744   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:05.461752   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.461773   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:05.461811   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:05.481686   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.481717   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:05.481769   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:05.500626   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.500651   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:05.500706   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:05.526590   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.526622   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:05.526668   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:05.556125   61028 logs.go:276] 0 containers: []
	W0229 18:42:05.556152   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:05.556164   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:05.556179   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:05.626067   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:05.626100   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:05.643608   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:05.643662   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:05.718049   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:05.718071   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:05.718085   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:05.765674   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:05.765706   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:08.324837   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:08.339443   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:08.358468   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.358497   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:08.358557   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:08.376936   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.376961   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:08.377007   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:08.395801   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.395824   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:08.395869   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:08.414768   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.414798   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:08.414859   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:08.433790   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.433819   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:08.433891   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:08.455531   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.455564   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:08.455627   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:08.476700   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.476732   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:08.476785   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:08.499573   61028 logs.go:276] 0 containers: []
	W0229 18:42:08.499605   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:08.499616   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:08.499629   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:08.565355   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:08.565399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:08.587490   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:08.587516   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:08.676534   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:08.676556   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:08.676572   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:08.720490   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:08.720522   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:11.285250   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:11.300987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:11.320461   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.320490   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:11.320541   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:11.338867   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.338896   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:11.338951   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:11.358324   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.358349   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:11.358410   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:11.377369   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.377443   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:11.377495   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:11.395937   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.395967   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:11.396023   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:11.417778   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.417805   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:11.417879   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:11.439478   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.439508   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:11.439564   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:11.460426   61028 logs.go:276] 0 containers: []
	W0229 18:42:11.460449   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:11.460461   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:11.460473   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:11.516936   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:11.516972   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:11.612278   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:11.612306   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:11.669010   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:11.669040   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:11.685227   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:11.685259   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:11.760348   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:14.261240   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:14.277590   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:14.303446   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.303476   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:14.303530   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:14.328329   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.328353   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:14.328420   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:14.350991   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.351016   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:14.351070   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:14.371773   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.371797   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:14.371854   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:14.391749   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.391778   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:14.391835   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:14.413317   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.413350   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:14.413406   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:14.436124   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.436144   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:14.436200   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:14.457218   61028 logs.go:276] 0 containers: []
	W0229 18:42:14.457246   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:14.457259   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:14.457273   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:14.541945   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:14.541990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:14.563056   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:14.563095   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:14.658980   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:14.659001   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:14.659015   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:14.714712   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:14.714758   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:17.283517   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:17.297932   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:17.317725   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.317753   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:17.317802   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:17.336453   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.336478   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:17.336530   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:17.356379   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.356412   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:17.356471   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:17.376567   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.376598   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:17.376657   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:17.394566   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.394596   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:17.394654   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:17.412665   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.412694   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:17.412752   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:17.435002   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.435026   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:17.435102   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:17.453865   61028 logs.go:276] 0 containers: []
	W0229 18:42:17.453897   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:17.453910   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:17.453930   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:17.570363   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:17.570398   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:17.570412   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:17.615370   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:17.615407   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:17.688714   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:17.688740   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:17.747370   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:17.747399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:20.264874   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:20.279410   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:20.298746   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.298778   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:20.298829   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:20.319286   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.319317   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:20.319375   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:20.342532   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.342561   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:20.342619   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:20.360793   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.360827   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:20.360883   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:20.379994   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.380023   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:20.380082   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:20.398568   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.398592   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:20.398639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:20.416424   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.416462   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:20.416519   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:20.434980   61028 logs.go:276] 0 containers: []
	W0229 18:42:20.435013   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:20.435027   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:20.435041   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:20.516738   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:20.516772   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:20.585967   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:20.586001   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:20.601759   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:20.601798   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:20.674982   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:20.675004   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:20.675023   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:23.233011   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:23.247131   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:23.266435   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.266466   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:23.266530   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:23.285435   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.285469   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:23.285530   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:23.305373   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.305397   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:23.305453   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:23.326904   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.326936   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:23.326994   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:23.346159   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.346191   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:23.346251   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:23.364108   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.364133   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:23.364183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:23.383996   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.384027   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:23.384088   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:23.403130   61028 logs.go:276] 0 containers: []
	W0229 18:42:23.403163   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:23.403175   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:23.403189   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:23.458391   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:23.458427   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:23.478510   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:23.478553   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:23.603851   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:23.603875   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:23.603899   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:23.665086   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:23.665114   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:26.236366   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:26.254384   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:26.275583   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.275615   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:26.275704   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:26.294017   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.294043   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:26.294091   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:26.314992   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.315015   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:26.315069   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:26.333494   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.333517   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:26.333564   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:26.355417   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.355450   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:26.355507   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:26.378864   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.378893   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:26.378945   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:26.395969   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.395995   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:26.396047   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:26.416378   61028 logs.go:276] 0 containers: []
	W0229 18:42:26.416407   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:26.416418   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:26.416432   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:26.475946   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:26.475981   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:26.497923   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:26.497959   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:26.603792   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:26.603812   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:26.603824   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:26.670954   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:26.670992   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:29.241172   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:29.261361   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:29.282390   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.282414   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:29.282474   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:29.302946   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.302977   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:29.303032   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:29.322191   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.322226   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:29.322290   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:29.341679   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.341712   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:29.341768   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:29.361809   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.361836   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:29.361893   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:29.381474   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.381501   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:29.381549   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:29.405507   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.405531   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:29.405589   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:29.426398   61028 logs.go:276] 0 containers: []
	W0229 18:42:29.426427   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:29.426439   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:29.426453   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:29.511316   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:29.511341   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:29.511373   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:29.556881   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:29.556917   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:29.659296   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:29.659331   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:29.725946   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:29.725993   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:32.245361   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:32.261980   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:32.283232   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.283259   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:32.283332   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:32.305608   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.305641   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:32.305696   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:32.329948   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.329980   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:32.330037   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:32.354396   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.354427   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:32.354481   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:32.379106   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.379137   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:32.379194   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:32.403688   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.403721   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:32.403784   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:32.428679   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.428709   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:32.428785   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:32.451514   61028 logs.go:276] 0 containers: []
	W0229 18:42:32.451544   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:32.451557   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:32.451577   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:32.467215   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:32.467258   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:32.571501   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:32.571546   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:32.571562   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:32.628174   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:32.628209   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:32.694605   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:32.694663   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:35.253124   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:35.270958   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:35.293128   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.293162   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:35.293219   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:35.315107   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.315139   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:35.315198   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:35.337621   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.337649   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:35.337712   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:35.359592   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.359621   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:35.359692   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:35.382538   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.382562   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:35.382617   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:35.404564   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.404596   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:35.404661   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:35.423613   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.423652   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:35.423715   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:35.441363   61028 logs.go:276] 0 containers: []
	W0229 18:42:35.441392   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:35.441417   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:35.441442   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:35.533525   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:35.533547   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:35.533561   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:35.590625   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:35.590669   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:35.676668   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:35.676693   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:35.728837   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:35.728877   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:38.247529   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:38.260914   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:38.282245   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.282271   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:38.282343   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:38.302815   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.302840   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:38.302888   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:38.323825   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.323856   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:38.323915   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:38.347234   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.347254   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:38.347294   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:38.367761   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.367781   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:38.367829   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:38.392584   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.392602   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:38.392645   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:38.416840   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.416872   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:38.416926   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:38.438709   61028 logs.go:276] 0 containers: []
	W0229 18:42:38.438726   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:38.438735   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:38.438744   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:38.492320   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:38.492357   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:38.604435   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:38.604467   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:38.676096   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:38.676132   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:38.692606   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:38.692649   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:38.787286   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:41.288041   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:41.304638   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:41.339035   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.339062   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:41.339119   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:41.373113   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.373134   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:41.373178   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:41.398717   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.398742   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:41.398790   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:41.421805   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.421831   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:41.421881   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:41.443568   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.443593   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:41.443659   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:41.463634   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.463671   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:41.463722   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:41.485661   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.485687   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:41.485733   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:41.511531   61028 logs.go:276] 0 containers: []
	W0229 18:42:41.511560   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:41.511573   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:41.511590   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:41.533518   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:41.533554   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:41.640711   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:41.640733   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:41.640747   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:41.700696   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:41.700743   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:41.765002   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:41.765033   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:44.331518   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:44.346845   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:44.366987   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.367018   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:44.367074   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:44.386963   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.386991   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:44.387045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:44.408441   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.408479   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:44.408536   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:44.429594   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.429628   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:44.429721   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:44.450755   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.450787   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:44.450845   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:44.471545   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.471581   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:44.471646   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:44.491101   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.491131   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:44.491192   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:44.510016   61028 logs.go:276] 0 containers: []
	W0229 18:42:44.510045   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:44.510057   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:44.510070   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:44.533453   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:44.533497   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:44.649439   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:44.649464   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:44.649479   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:44.701197   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:44.701229   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:44.777513   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:44.777540   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:47.336377   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:47.354653   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:47.376880   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.376903   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:47.376956   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:47.396853   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.396882   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:47.396936   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:47.417267   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.417294   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:47.417349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:47.435575   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.435605   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:47.435683   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:47.457946   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.457973   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:47.458042   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:47.477177   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.477200   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:47.477278   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:47.496217   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.496249   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:47.496303   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:47.523965   61028 logs.go:276] 0 containers: []
	W0229 18:42:47.523987   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:47.524001   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:47.524018   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:47.608163   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:47.608196   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:47.685082   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:47.685127   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:47.706844   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:47.706884   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:47.801138   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:47.801167   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:47.801184   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:50.355686   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:50.371416   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:50.394544   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.394572   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:50.394618   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:50.414519   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.414541   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:50.414597   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:50.436050   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.436082   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:50.436147   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:50.461065   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.461102   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:50.461183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:50.483283   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.483312   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:50.483367   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:50.506911   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.506939   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:50.506992   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:50.536678   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.536708   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:50.536765   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:50.561894   61028 logs.go:276] 0 containers: []
	W0229 18:42:50.561926   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:50.561950   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:50.561966   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:50.642424   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:50.642458   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:50.693113   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:50.693146   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:50.708215   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:50.708254   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:50.783800   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:50.783830   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:50.783857   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:53.338402   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:53.357513   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:53.382257   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.382284   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:53.382348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:53.408298   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.408326   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:53.408402   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:53.432552   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.432586   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:53.432649   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:53.457905   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.457938   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:53.457998   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:53.482537   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.482566   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:53.482609   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:53.501316   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.501341   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:53.501414   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:53.532356   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.532404   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:53.532460   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:53.552030   61028 logs.go:276] 0 containers: []
	W0229 18:42:53.552056   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:53.552067   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:53.552077   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:53.618956   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:53.618992   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:53.646366   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:53.646400   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:53.728041   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:53.728062   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:53.728073   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:53.774794   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:53.774822   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:56.334289   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:56.349806   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:56.369177   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.369210   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:56.369287   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:56.387089   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.387109   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:56.387162   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:56.407785   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.407813   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:56.407874   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:56.427656   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.427690   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:56.427745   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:56.448818   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.448843   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:56.448901   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:56.472102   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.472128   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:56.472182   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:56.492268   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.492295   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:56.492373   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:56.510648   61028 logs.go:276] 0 containers: []
	W0229 18:42:56.510681   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:56.510696   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:56.510732   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:56.565684   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:56.565724   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:56.580560   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:56.580590   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:56.663332   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:42:56.663366   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:56.663382   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:56.706327   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:56.706366   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:59.279020   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:42:59.292851   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:42:59.313418   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.313453   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:42:59.313507   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:42:59.331544   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.331573   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:42:59.331631   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:42:59.348612   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.348633   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:42:59.348676   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:42:59.365818   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.365842   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:42:59.365884   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:42:59.383695   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.383722   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:42:59.383786   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:42:59.405764   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.405794   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:42:59.405850   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:42:59.428072   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.428100   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:42:59.428148   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:42:59.450334   61028 logs.go:276] 0 containers: []
	W0229 18:42:59.450358   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:42:59.450368   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:42:59.450381   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:42:59.502919   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:42:59.502949   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:42:59.579473   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:42:59.579499   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:42:59.629394   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:42:59.629426   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:42:59.644310   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:42:59.644338   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:42:59.727698   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:02.228553   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:02.242072   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:02.262441   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.262471   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:02.262527   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:02.279834   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.279864   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:02.279934   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:02.300772   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.300804   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:02.300847   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:02.321489   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.321521   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:02.321577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:02.340790   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.340816   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:02.340888   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:02.360274   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.360305   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:02.360363   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:02.379733   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.379757   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:02.379806   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:02.397674   61028 logs.go:276] 0 containers: []
	W0229 18:43:02.397701   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:02.397714   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:02.397727   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:02.449078   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:02.449104   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:02.471987   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:02.472011   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:02.589323   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:02.589350   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:02.589363   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:02.634340   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:02.634372   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:05.204806   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:05.219669   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:05.240105   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.240144   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:05.240204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:05.260727   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.260750   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:05.260809   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:05.280527   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.280550   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:05.280600   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:05.305637   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.305664   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:05.305723   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:05.329029   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.329064   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:05.329133   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:05.353165   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.353192   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:05.353244   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:05.375776   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.375813   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:05.375870   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:05.396558   61028 logs.go:276] 0 containers: []
	W0229 18:43:05.396591   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:05.396605   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:05.396619   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:05.453203   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:05.453255   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:05.470963   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:05.470990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:05.575699   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:05.575727   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:05.575740   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:05.647199   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:05.647240   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:08.243954   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:08.262082   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:08.287572   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.287600   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:08.287670   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:08.310920   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.310942   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:08.310986   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:08.335680   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.335719   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:08.335777   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:08.356389   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.356419   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:08.356473   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:08.378991   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.379028   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:08.379087   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:08.401516   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.401541   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:08.401588   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:08.426151   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.426182   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:08.426241   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:08.452927   61028 logs.go:276] 0 containers: []
	W0229 18:43:08.452955   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:08.452968   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:08.452983   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:08.561123   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:08.561153   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:08.637535   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:08.637580   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:08.659882   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:08.659906   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:08.741990   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:08.742013   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:08.742032   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:11.294642   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:11.309189   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:11.332955   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.332982   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:11.333040   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:11.357817   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.357848   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:11.357908   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:11.393083   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.393114   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:11.393174   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:11.417166   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.417192   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:11.417325   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:11.439959   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.439991   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:11.440045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:11.458653   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.458685   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:11.458737   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:11.480697   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.480721   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:11.480766   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:11.500836   61028 logs.go:276] 0 containers: []
	W0229 18:43:11.500869   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:11.500882   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:11.500895   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:11.525326   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:11.525371   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:11.652296   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:11.652317   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:11.652330   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:11.721256   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:11.721306   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:11.807374   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:11.807408   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:14.371774   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:14.389890   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:14.409134   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.409162   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:14.409220   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:14.429984   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.430016   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:14.430077   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:14.458451   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.458477   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:14.458523   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:14.476417   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.476452   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:14.476510   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:14.494607   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.494641   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:14.494702   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:14.527781   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.527806   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:14.527872   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:14.556543   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.556576   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:14.556639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:14.600740   61028 logs.go:276] 0 containers: []
	W0229 18:43:14.600769   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:14.600781   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:14.600795   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:14.668093   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:14.668127   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:14.685860   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:14.685890   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:14.761043   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:14.761066   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:14.761080   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:14.820630   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:14.820669   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:17.381240   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:17.401014   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:17.426218   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.426248   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:17.426311   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:17.449344   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.449375   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:17.449432   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:17.472796   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.472828   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:17.472908   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:17.495772   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.495797   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:17.495853   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:17.529767   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.529792   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:17.529844   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:17.561417   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.561447   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:17.561503   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:17.598878   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.598916   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:17.598981   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:17.631935   61028 logs.go:276] 0 containers: []
	W0229 18:43:17.631964   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:17.631976   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:17.631991   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:17.731734   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:17.731759   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:17.731771   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:17.782153   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:17.782189   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:17.854289   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:17.854321   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:17.921792   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:17.921851   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:20.439702   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:20.457312   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:20.480161   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.480194   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:20.480250   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:20.503965   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.503994   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:20.504049   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:20.534573   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.534603   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:20.534658   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:20.567579   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.567612   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:20.567691   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:20.593672   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.593703   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:20.593762   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:20.628466   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.628497   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:20.628558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:20.656626   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.656658   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:20.656720   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:20.678925   61028 logs.go:276] 0 containers: []
	W0229 18:43:20.680942   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:20.680955   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:20.680967   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:20.747962   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:20.747993   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:20.815624   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:20.815674   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:20.833606   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:20.833632   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:20.906012   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:20.906034   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:20.906048   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:23.468297   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:23.486950   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:23.512835   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.512865   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:23.512924   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:23.532964   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.532991   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:23.533044   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:23.550668   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.550708   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:23.550787   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:23.575009   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.575036   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:23.575098   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:23.602818   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.602858   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:23.602920   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:23.626980   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.627007   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:23.627065   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:23.647759   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.647786   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:23.647842   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:23.666065   61028 logs.go:276] 0 containers: []
	W0229 18:43:23.666091   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:23.666103   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:23.666117   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:23.716176   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:23.716210   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:23.731878   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:23.731909   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:23.799757   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:23.799782   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:23.799798   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:23.844320   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:23.844354   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:26.423764   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:26.437259   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:26.457356   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.457383   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:26.457455   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:26.475724   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.475747   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:26.475803   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:26.494807   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.494845   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:26.494890   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:26.519514   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.519543   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:26.519607   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:26.545979   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.546010   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:26.546067   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:26.567197   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.567226   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:26.567285   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:26.592950   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.592985   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:26.593043   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:26.622655   61028 logs.go:276] 0 containers: []
	W0229 18:43:26.622687   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:26.622700   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:26.622714   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:26.685069   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:26.685115   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:26.699349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:26.699382   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:26.798272   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:26.798294   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:26.798320   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:26.848181   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:26.848217   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:29.416901   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:29.431817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:29.451594   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.451618   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:29.451678   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:29.475706   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.475746   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:29.475809   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:29.499205   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.499248   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:29.499302   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:29.527759   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.527790   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:29.527861   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:29.550453   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.550482   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:29.550540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:29.584915   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.584949   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:29.585006   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:29.619147   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.619178   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:29.619236   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:29.651839   61028 logs.go:276] 0 containers: []
	W0229 18:43:29.651865   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:29.651887   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:29.651901   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:29.702982   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:29.703015   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:29.717605   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:29.717630   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:29.786101   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:29.786123   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:29.786135   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:29.839442   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:29.839480   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:32.401939   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:32.419093   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:32.437942   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.437964   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:32.438012   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:32.454843   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.454868   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:32.454943   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:32.474274   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.474307   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:32.474375   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:32.494265   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.494291   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:32.494355   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:32.515775   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.515801   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:32.515863   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:32.533741   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.533775   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:32.533832   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:32.554655   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.554681   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:32.554739   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:32.577807   61028 logs.go:276] 0 containers: []
	W0229 18:43:32.577842   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:32.577854   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:32.577893   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:32.602193   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:32.602228   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:32.688281   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:32.688304   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:32.688329   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:32.735331   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:32.735365   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:32.796176   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:32.796205   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:35.347846   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:35.362084   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:35.379651   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.379681   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:35.379739   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:35.397479   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.397502   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:35.397545   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:35.422578   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.422606   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:35.422666   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:35.440483   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.440513   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:35.440560   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:35.457604   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.457627   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:35.457671   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:35.475840   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.475867   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:35.475925   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:35.494062   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.494088   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:35.494133   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:35.514929   61028 logs.go:276] 0 containers: []
	W0229 18:43:35.514959   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:35.514970   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:35.514984   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:35.567346   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:35.567380   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:35.584152   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:35.584178   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:35.665028   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:35.665054   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:35.665070   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:35.711320   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:35.711366   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:38.275730   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:38.291079   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:38.311697   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.311734   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:38.311787   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:38.328776   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.328803   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:38.328857   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:38.346181   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.346216   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:38.346259   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:38.363587   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.363619   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:38.363690   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:38.384966   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.384990   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:38.385035   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:38.403757   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.403788   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:38.403843   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:38.421370   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.421400   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:38.421446   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:38.441241   61028 logs.go:276] 0 containers: []
	W0229 18:43:38.441268   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:38.441278   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:38.441291   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:38.559358   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:38.559383   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:38.559398   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:38.609161   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:38.609211   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:38.677469   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:38.677503   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:38.732190   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:38.732226   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:41.248292   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:41.264817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:41.285796   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.285825   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:41.285890   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:41.307111   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.307141   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:41.307195   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:41.329744   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.329775   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:41.329833   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:41.353601   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.353631   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:41.353690   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:41.372103   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.372140   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:41.372216   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:41.391040   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.391063   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:41.391153   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:41.412548   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.412585   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:41.412686   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:41.431477   61028 logs.go:276] 0 containers: []
	W0229 18:43:41.431505   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:41.431517   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:41.431531   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:41.482603   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:41.482635   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:41.505449   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:41.505486   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:41.614522   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:41.614544   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:41.614560   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:41.660181   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:41.660214   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:44.229062   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:44.245502   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:44.267915   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.267946   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:44.268012   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:44.289036   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.289074   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:44.289133   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:44.308884   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.308912   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:44.308970   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:44.327325   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.327356   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:44.327415   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:44.350711   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.350741   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:44.350799   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:44.375697   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.375727   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:44.375786   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:44.397020   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.397051   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:44.397110   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:44.420391   61028 logs.go:276] 0 containers: []
	W0229 18:43:44.420423   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:44.420437   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:44.420451   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:44.480862   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:44.480910   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:44.582756   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:44.582788   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:44.659375   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:44.659406   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:44.678134   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:44.678161   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:44.779338   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:47.279752   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:47.295727   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:47.321254   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.321288   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:47.321348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:47.342276   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.342306   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:47.342378   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:47.364982   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.365011   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:47.365061   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:47.383755   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.383787   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:47.383846   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:47.402042   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.402066   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:47.402118   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:47.424900   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.424930   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:47.424987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:47.448348   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.448381   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:47.448440   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:47.468213   61028 logs.go:276] 0 containers: []
	W0229 18:43:47.468260   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:47.468269   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:47.468279   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:47.489835   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:47.489895   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:47.580170   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:47.580193   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:47.580209   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:47.639910   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:47.639945   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:47.705270   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:47.705306   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:50.265693   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:50.279785   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:50.301314   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.301342   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:50.301397   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:50.322446   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.322477   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:50.322557   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:50.341521   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.341551   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:50.341609   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:50.363765   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.363804   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:50.363870   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:50.384768   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.384798   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:50.384854   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:50.407191   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.407226   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:50.407286   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:50.429446   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.429478   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:50.429556   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:50.450070   61028 logs.go:276] 0 containers: []
	W0229 18:43:50.450095   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:50.450107   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:50.450121   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:50.533550   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:50.533573   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:50.533590   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:50.581490   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:50.581542   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:50.657508   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:50.657540   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:50.716221   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:50.716255   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:53.232649   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:53.247242   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:53.265400   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.265433   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:53.265493   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:53.285972   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.286002   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:53.286061   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:53.305455   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.305485   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:53.305539   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:53.326893   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.326919   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:53.326974   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:53.344984   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.345019   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:53.345072   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:53.364055   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.364085   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:53.364140   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:53.387480   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.387521   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:53.387583   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:53.406645   61028 logs.go:276] 0 containers: []
	W0229 18:43:53.406676   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:53.406687   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:53.406716   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:53.423885   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:53.423912   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:53.492468   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:53.492493   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:53.492510   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:53.541818   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:53.541858   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:53.621273   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:53.621301   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:56.174461   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:56.189040   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:56.207504   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.207531   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:56.207577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:56.224573   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.224595   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:56.224643   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:56.241744   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.241771   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:56.241824   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:56.259510   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.259542   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:56.259599   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:56.277195   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.277234   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:56.277286   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:56.295783   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.295822   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:56.295884   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:56.314248   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.314273   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:56.314333   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:56.332726   61028 logs.go:276] 0 containers: []
	W0229 18:43:56.332754   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:56.332767   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:56.332778   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:56.385051   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:56.385083   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:56.400424   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:56.400455   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:56.470278   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:43:56.470334   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:56.470364   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:56.515525   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:56.515560   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:59.089329   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:43:59.102960   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:43:59.122107   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.122137   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:43:59.122200   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:43:59.139932   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.139957   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:43:59.140012   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:43:59.159740   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.159766   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:43:59.159819   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:43:59.177716   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.177741   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:43:59.177792   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:43:59.196917   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.196948   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:43:59.197008   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:43:59.215357   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.215381   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:43:59.215425   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:43:59.233699   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.233735   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:43:59.233792   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:43:59.251864   61028 logs.go:276] 0 containers: []
	W0229 18:43:59.251895   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:43:59.251907   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:43:59.251918   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:43:59.297701   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:43:59.297746   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:43:59.361700   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:43:59.361732   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:43:59.411566   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:43:59.411598   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:43:59.427349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:43:59.427374   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:43:59.496720   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:01.997181   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:02.011206   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:02.030099   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.030125   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:02.030173   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:02.048060   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.048086   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:02.048144   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:02.066190   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.066220   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:02.066284   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:02.085484   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.085509   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:02.085568   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:02.109533   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.109559   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:02.109615   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:02.131800   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.131822   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:02.131864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:02.151122   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.151154   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:02.151208   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:02.171811   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.171846   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:02.171859   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:02.171873   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:02.216251   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:02.216284   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:02.276667   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:02.276698   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:02.328533   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:02.328564   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:02.344290   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:02.344329   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:02.414487   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:04.915506   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:04.930595   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:04.949852   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.949885   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:04.949943   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:04.968164   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.968193   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:04.968252   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:04.987171   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.987196   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:04.987241   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:05.004487   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.004517   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:05.004575   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:05.022570   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.022604   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:05.022659   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:05.040454   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.040481   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:05.040540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:05.061471   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.061502   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:05.061558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:05.079346   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.079377   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:05.079389   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:05.079404   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:05.093664   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:05.093691   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:05.164031   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:05.164048   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:05.164058   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:05.207561   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:05.207596   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:05.263450   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:05.263484   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:07.813986   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:07.834016   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:07.856292   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.856330   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:07.856390   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:07.874903   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.874933   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:07.874988   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:07.893822   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.893849   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:07.893904   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:07.911815   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.911840   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:07.911896   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:07.930733   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.930763   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:07.930821   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:07.950028   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.950062   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:07.950118   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:07.969192   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.969219   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:07.969281   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:07.988711   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.988733   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:07.988742   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:07.988752   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:08.031566   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:08.031601   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:08.091610   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:08.091651   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:08.143480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:08.143515   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:08.159139   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:08.159166   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:08.238088   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:10.738478   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:10.756305   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:10.780161   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.780191   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:10.780244   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:10.799891   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.799921   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:10.799981   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:10.815310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.815340   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:10.815401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:10.843908   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.843934   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:10.843996   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:10.864272   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.864295   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:10.864349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:10.882310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.882336   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:10.882407   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:10.899979   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.900006   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:10.900064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:10.917343   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.917373   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:10.917385   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:10.917399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:10.970492   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:10.970529   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:10.985824   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:10.985850   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:11.063258   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:11.063281   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:11.063296   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:11.106836   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:11.106866   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:13.671084   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:13.685411   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:13.705142   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.705173   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:13.705234   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:13.724509   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.724548   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:13.724614   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:13.744230   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.744280   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:13.744348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:13.769730   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.769759   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:13.769817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:13.799466   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.799496   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:13.799556   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:13.820793   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.820823   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:13.820887   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:13.850052   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.850082   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:13.850138   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:13.874449   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.874477   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:13.874489   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:13.874504   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:13.932481   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:13.932513   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:13.947628   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:13.947677   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:14.018240   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:14.018263   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:14.018286   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:14.059187   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:14.059217   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:16.633510   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:16.652639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:16.673532   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.673566   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:16.673618   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:16.691920   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.691945   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:16.692006   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:16.709420   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.709443   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:16.709484   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:16.727650   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.727681   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:16.727734   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:16.746267   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.746293   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:16.746344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:16.774818   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.774849   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:16.774900   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:16.799617   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.799650   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:16.799704   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:16.820466   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.820501   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:16.820515   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:16.820528   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:16.887246   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:16.887289   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:16.902847   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:16.902872   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:16.980952   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:16.980973   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:16.980990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:17.026066   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:17.026101   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:19.597286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:19.613257   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:19.630212   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.630243   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:19.630298   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:19.647871   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.647899   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:19.647953   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:19.664725   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.664760   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:19.664817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:19.682528   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.682560   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:19.682617   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:19.700820   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.700850   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:19.700917   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:19.718645   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.718673   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:19.718736   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:19.737246   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.737289   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:19.737344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:19.754748   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.754776   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:19.754793   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:19.754805   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:19.809195   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:19.809230   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:19.830327   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:19.830365   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:19.918269   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:19.918296   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:19.918313   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:19.960393   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:19.960425   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:22.520192   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:22.534228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:22.552116   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.552147   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:22.552192   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:22.574830   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.574867   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:22.574933   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:22.594718   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.594752   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:22.594810   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:22.615676   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.615711   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:22.615772   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:22.635359   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.635393   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:22.635455   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:22.655352   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.655381   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:22.655442   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:22.673481   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.673508   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:22.673562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:22.691542   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.691563   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:22.691573   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:22.691583   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:22.741934   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:22.741964   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:22.760644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:22.760681   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:22.838701   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:22.838724   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:22.838737   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:22.879863   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:22.879892   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.442546   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:25.456540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:25.476142   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.476168   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:25.476213   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:25.494185   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.494216   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:25.494275   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:25.517155   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.517187   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:25.517251   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:25.535776   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.535805   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:25.535864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:25.554255   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.554283   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:25.554326   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:25.571356   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.571383   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:25.571438   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:25.589129   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.589158   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:25.589218   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:25.607610   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.607654   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:25.607667   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:25.607683   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.669924   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:25.669954   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:25.721765   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:25.721797   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:25.748884   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:25.748919   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:25.862593   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:25.862613   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:25.862627   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.412364   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:28.426168   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:28.444018   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.444048   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:28.444104   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:28.462393   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.462422   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:28.462481   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:28.480993   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.481021   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:28.481065   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:28.498930   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.498974   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:28.499034   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:28.517355   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.517386   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:28.517452   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:28.536493   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.536522   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:28.536629   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:28.554364   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.554392   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:28.554448   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:28.573203   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.573229   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:28.573241   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:28.573260   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:28.628788   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:28.628820   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:28.647595   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:28.647631   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:28.726195   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:28.726215   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:28.726228   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.783540   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:28.783575   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:31.358413   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:31.374228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:31.392618   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.392649   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:31.392713   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:31.411406   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.411437   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:31.411497   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:31.431126   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.431157   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:31.431204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:31.451504   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.451531   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:31.451571   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:31.470318   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.470339   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:31.470388   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:31.489264   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.489289   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:31.489341   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:31.507636   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.507672   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:31.507730   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:31.526580   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.526602   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:31.526614   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:31.526634   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:31.568164   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:31.568199   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:31.627762   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:31.627786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:31.678480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:31.678514   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:31.695623   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:31.695659   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:31.793131   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.293320   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:34.307693   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:34.328775   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.328805   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:34.328863   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:34.347049   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.347075   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:34.347126   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:34.365903   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.365933   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:34.365993   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:34.383898   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.383932   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:34.383995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:34.402605   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.402632   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:34.402694   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:34.420889   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.420918   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:34.420976   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:34.439973   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.440000   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:34.440059   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:34.457452   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.457483   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:34.457496   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:34.457510   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:34.505134   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:34.505167   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:34.520181   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:34.520212   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:34.589435   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.589455   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:34.589466   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:34.634139   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:34.634168   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:37.197653   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:37.211167   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:37.233259   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.233294   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:37.233349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:37.254237   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.254264   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:37.254322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:37.274320   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.274347   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:37.274401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:37.292854   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.292880   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:37.292929   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:37.310405   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.310429   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:37.310466   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:37.328374   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.328394   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:37.328434   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:37.345294   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.345321   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:37.345383   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:37.362743   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.362768   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:37.362779   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:37.362793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:37.410877   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:37.410914   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:37.425653   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:37.425689   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:37.490957   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:37.490981   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:37.490994   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:37.530316   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:37.530344   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:40.088251   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:40.102064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:40.121304   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.121338   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:40.121392   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:40.139634   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.139682   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:40.139742   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:40.156924   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.156950   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:40.156995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:40.174050   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.174076   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:40.174117   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:40.191417   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.191444   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:40.191488   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:40.209488   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.209515   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:40.209578   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:40.226753   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.226775   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:40.226828   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:40.244478   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.244505   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:40.244516   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:40.244526   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:40.299257   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:40.299293   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:40.316326   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:40.316356   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:40.407508   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:40.407531   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:40.407545   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:40.450989   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:40.451022   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:43.024851   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.040954   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:43.067062   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.067087   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:43.067142   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:43.112898   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.112929   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:43.112987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:43.144432   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.144516   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:43.144577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:43.180141   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.180170   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:43.180217   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:43.203493   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.203521   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:43.203562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:43.227035   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.227065   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:43.227120   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:43.247867   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.247897   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:43.247959   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:43.269511   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.269538   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:43.269550   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:43.269566   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:43.287349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:43.287380   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:43.368033   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:43.368051   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:43.368062   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:43.425200   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:43.425235   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:43.492870   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:43.492906   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:46.045085   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:46.060842   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:46.080115   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.080151   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:46.080204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:46.098951   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.098977   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:46.099045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:46.117884   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.117914   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:46.117962   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:46.135090   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.135122   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:46.135183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:46.154068   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.154094   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:46.154150   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:46.175259   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.175291   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:46.175348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:46.199979   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.200010   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:46.200073   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:46.219082   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.219109   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:46.219118   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:46.219129   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:46.285752   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:46.285802   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:46.362896   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:46.362923   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:46.424465   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:46.424496   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:46.440644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:46.440676   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:46.516207   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.017356   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:49.036558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:49.062037   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.062073   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:49.062122   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:49.089359   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.089383   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:49.089436   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:49.112366   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.112397   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:49.112447   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:49.135268   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.135300   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:49.135357   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:49.158768   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.158795   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:49.158862   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:49.182032   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.182056   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:49.182100   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:49.202844   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.202880   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:49.202937   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:49.223496   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.223522   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:49.223533   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:49.223548   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:49.283784   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:49.283833   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:49.299408   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:49.299450   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:49.381751   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.381777   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:49.381793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:49.425633   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:49.425671   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:51.992923   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:52.009101   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:52.030751   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.030778   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:52.030834   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:52.051175   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.051205   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:52.051258   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:52.070270   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.070292   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:52.070346   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:52.089729   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.089755   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:52.089807   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:52.109158   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.109181   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:52.109235   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:52.127440   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.127464   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:52.127509   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:52.146458   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.146485   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:52.146542   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:52.164899   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.164925   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:52.164934   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:52.164944   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:52.223827   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:52.223870   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:52.245832   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:52.245869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:52.350010   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:52.350037   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:52.350051   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:52.400763   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:52.400792   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:54.965688   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:54.984737   61028 kubeadm.go:640] restartCluster took 4m13.179905747s
	W0229 18:44:54.984813   61028 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 18:44:54.984842   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:44:55.440354   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:55.456286   61028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:55.467480   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:55.478159   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:55.478205   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:44:55.539798   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:44:55.539888   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:44:55.752087   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:44:55.752264   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:44:55.752401   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:44:55.906569   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:44:55.907774   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:44:55.917392   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:44:56.046677   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:44:56.048655   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:44:56.048771   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:44:56.048874   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:44:56.048992   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:44:56.052691   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:44:56.052805   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:44:56.052890   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:44:56.052984   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:44:56.053096   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:44:56.053215   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:44:56.053320   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:44:56.053379   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:44:56.053475   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:44:56.176574   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:44:56.329888   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:44:56.623253   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:44:56.722273   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:44:56.723020   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:44:56.724880   61028 out.go:204]   - Booting up control plane ...
	I0229 18:44:56.725005   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:44:56.730320   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:44:56.731630   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:44:56.732332   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:44:56.734500   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:45:36.735482   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:45:36.736181   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:36.736433   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:41.737158   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:41.737332   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:51.737722   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:51.737923   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:11.738541   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:11.738773   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:51.739942   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:51.740223   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:51.740253   61028 kubeadm.go:322] 
	I0229 18:46:51.740302   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:51.740342   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:51.740349   61028 kubeadm.go:322] 
	I0229 18:46:51.740377   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:51.740404   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:51.740528   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:51.740544   61028 kubeadm.go:322] 
	I0229 18:46:51.740646   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:51.740675   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:51.740726   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:51.740736   61028 kubeadm.go:322] 
	I0229 18:46:51.740844   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:51.740950   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:51.741029   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:51.741103   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:51.741204   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:51.741261   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:51.742036   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:46:51.742190   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:46:51.742337   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:46:51.742464   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:51.742640   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:51.742725   61028 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:46:51.742786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:46:52.197144   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:52.214163   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:46:52.226374   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:46:52.226416   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:46:52.285152   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:52.285314   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:52.500283   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:52.500430   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:52.500558   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:52.672731   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:52.672847   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:52.681682   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:52.809851   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:52.811832   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:52.811937   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:52.812027   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:52.812099   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:52.812153   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:52.812252   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:52.812333   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:52.812427   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:52.812513   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:52.812652   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:52.813069   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:52.813244   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:52.813324   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:52.931955   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:53.294257   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:53.376114   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:53.620085   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:53.620974   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:53.622696   61028 out.go:204]   - Booting up control plane ...
	I0229 18:46:53.622772   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:53.627326   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:53.628386   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:53.629224   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:53.632638   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:47:33.634399   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:47:33.635096   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:33.635349   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:38.635813   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:38.636020   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:48.636649   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:48.636873   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:08.637971   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:08.638214   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639456   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:48.639757   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639779   61028 kubeadm.go:322] 
	I0229 18:48:48.639840   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:48:48.639924   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:48:48.639950   61028 kubeadm.go:322] 
	I0229 18:48:48.640004   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:48:48.640046   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:48:48.640168   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:48:48.640178   61028 kubeadm.go:322] 
	I0229 18:48:48.640273   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:48:48.640313   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:48:48.640347   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:48:48.640353   61028 kubeadm.go:322] 
	I0229 18:48:48.640439   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:48:48.640559   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:48:48.640671   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:48:48.640752   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:48:48.640864   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:48:48.640919   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:48:48.641703   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:48:48.641878   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:48:48.641968   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:48:48.642071   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:48:48.642249   61028 kubeadm.go:406] StartCluster complete in 8m6.867140018s
	I0229 18:48:48.642265   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:48:48.642322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:48:48.674320   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.674348   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:48:48.674398   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:48:48.695124   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.695148   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:48:48.695190   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:48:48.712218   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.712245   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:48:48.712299   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:48:48.730912   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.730939   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:48:48.730982   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:48:48.748542   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.748576   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:48:48.748622   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:48:48.765544   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.765570   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:48:48.765623   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:48:48.791193   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.791238   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:48:48.791296   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:48:48.813084   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.813119   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:48:48.813132   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:48:48.813144   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:48:48.834348   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:48:48.834373   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:48:48.911451   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:48:48.911473   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:48:48.911485   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:48:48.954088   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:48:48.954119   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:48:49.019061   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:48:49.019092   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:48:49.067347   61028 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:48:49.067396   61028 out.go:239] * 
	W0229 18:48:49.067456   61028 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.067477   61028 out.go:239] * 
	W0229 18:48:49.068210   61028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:48:49.072114   61028 out.go:177] 
	W0229 18:48:49.073581   61028 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.073626   61028 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:48:49.073649   61028 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:48:49.075293   61028 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0": exit status 109
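Note: the captured stderr above ends with minikube's own suggestion (the out.go lines at 18:48:49) to set the kubelet cgroup driver explicitly, matching the IsDockerSystemdCheck warning about "cgroupfs" vs "systemd". A possible retry sketched from that suggestion, reusing the profile, driver, and Kubernetes version of the failed invocation (unverified as a fix for this particular run; the --extra-config value is exactly the one named in the log):

	out/minikube-linux-amd64 start -p old-k8s-version-467811 --memory=2200 --driver=kvm2 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet health check still refuses connections on 127.0.0.1:10248 after that, the advice embedded in the kubeadm output applies on the node itself: 'systemctl status kubelet' and 'journalctl -xeu kubelet'.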
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (284.241956ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467811 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-154269 image list                          | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | no-preload-580872 image list                           | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| addons  | enable metrics-server -p newest-cni-555986             | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-555986                  | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-555986 image list                           | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| image   | default-k8s-diff-port-270866                           | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:44:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:44:05.607270   63014 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:44:05.607394   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607403   63014 out.go:304] Setting ErrFile to fd 2...
	I0229 18:44:05.607407   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607676   63014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:44:05.608237   63014 out.go:298] Setting JSON to false
	I0229 18:44:05.609156   63014 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5196,"bootTime":1709227050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:44:05.609218   63014 start.go:139] virtualization: kvm guest
	I0229 18:44:05.611560   63014 out.go:177] * [newest-cni-555986] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:44:05.613001   63014 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:44:05.614331   63014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:44:05.612955   63014 notify.go:220] Checking for updates...
	I0229 18:44:05.617084   63014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:05.618405   63014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:44:05.619690   63014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:44:05.620981   63014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:44:01.997181   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:02.011206   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:02.030099   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.030125   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:02.030173   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:02.048060   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.048086   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:02.048144   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:02.066190   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.066220   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:02.066284   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:02.085484   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.085509   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:02.085568   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:02.109533   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.109559   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:02.109615   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:02.131800   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.131822   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:02.131864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:02.151122   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.151154   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:02.151208   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:02.171811   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.171846   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:02.171859   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:02.171873   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:02.216251   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:02.216284   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:02.276667   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:02.276698   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:02.328533   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:02.328564   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:02.344290   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:02.344329   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:02.414487   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:04.915506   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:04.930595   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:04.949852   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.949885   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:04.949943   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:04.968164   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.968193   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:04.968252   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:04.987171   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.987196   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:04.987241   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:05.004487   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.004517   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:05.004575   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:05.022570   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.022604   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:05.022659   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:05.040454   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.040481   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:05.040540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:05.061471   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.061502   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:05.061558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:05.079346   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.079377   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:05.079389   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:05.079404   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:05.093664   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:05.093691   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:05.164031   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:05.164048   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:05.164058   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:05.207561   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:05.207596   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:05.263450   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:05.263484   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:05.622668   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:05.623031   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.623066   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.638058   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0229 18:44:05.638482   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.638964   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.638985   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.639298   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.639500   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.639802   63014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:44:05.640142   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.640184   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.654483   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0229 18:44:05.654869   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.655391   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.655411   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.655711   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.655946   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.692636   63014 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:44:05.694074   63014 start.go:299] selected driver: kvm2
	I0229 18:44:05.694084   63014 start.go:903] validating driver "kvm2" against &{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.694190   63014 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:44:05.694807   63014 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.694873   63014 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:44:05.709500   63014 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:44:05.710380   63014 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 18:44:05.710470   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:05.710493   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:05.710517   63014 start_flags.go:323] config:
	{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.710788   63014 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.712665   63014 out.go:177] * Starting control plane node newest-cni-555986 in cluster newest-cni-555986
	I0229 18:44:03.148306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:05.151204   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:05.713933   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:05.713962   63014 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 18:44:05.713970   63014 cache.go:56] Caching tarball of preloaded images
	I0229 18:44:05.714027   63014 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:44:05.714037   63014 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 18:44:05.714127   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:05.714292   63014 start.go:365] acquiring machines lock for newest-cni-555986: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:44:05.714330   63014 start.go:369] acquired machines lock for "newest-cni-555986" in 19.249µs
	I0229 18:44:05.714342   63014 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:44:05.714349   63014 fix.go:54] fixHost starting: 
	I0229 18:44:05.714583   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.714604   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.728926   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0229 18:44:05.729416   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.729927   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.729954   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.730372   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.730554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.730711   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:05.732365   63014 fix.go:102] recreateIfNeeded on newest-cni-555986: state=Stopped err=<nil>
	I0229 18:44:05.732405   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	W0229 18:44:05.732559   63014 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:44:05.734332   63014 out.go:177] * Restarting existing kvm2 VM for "newest-cni-555986" ...
	I0229 18:44:05.735801   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Start
	I0229 18:44:05.736011   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring networks are active...
	I0229 18:44:05.736741   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network default is active
	I0229 18:44:05.737082   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network mk-newest-cni-555986 is active
	I0229 18:44:05.737422   63014 main.go:141] libmachine: (newest-cni-555986) Getting domain xml...
	I0229 18:44:05.738474   63014 main.go:141] libmachine: (newest-cni-555986) Creating domain...
	I0229 18:44:06.970960   63014 main.go:141] libmachine: (newest-cni-555986) Waiting to get IP...
	I0229 18:44:06.971959   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:06.972427   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:06.972494   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:06.972409   63049 retry.go:31] will retry after 191.930654ms: waiting for machine to come up
	I0229 18:44:07.165902   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.166504   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.166542   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.166425   63049 retry.go:31] will retry after 380.972246ms: waiting for machine to come up
	I0229 18:44:07.549044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.549505   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.549533   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.549448   63049 retry.go:31] will retry after 409.460218ms: waiting for machine to come up
	I0229 18:44:07.960093   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.960729   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.960764   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.960680   63049 retry.go:31] will retry after 494.525541ms: waiting for machine to come up
	I0229 18:44:08.456512   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:08.457044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:08.457070   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:08.457006   63049 retry.go:31] will retry after 702.742264ms: waiting for machine to come up
	I0229 18:44:09.160839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.161340   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.161399   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.161277   63049 retry.go:31] will retry after 791.133205ms: waiting for machine to come up
	I0229 18:44:09.953571   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.954234   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.954266   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.954187   63049 retry.go:31] will retry after 1.026362572s: waiting for machine to come up
	I0229 18:44:07.813986   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:07.834016   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:07.856292   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.856330   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:07.856390   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:07.874903   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.874933   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:07.874988   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:07.893822   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.893849   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:07.893904   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:07.911815   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.911840   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:07.911896   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:07.930733   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.930763   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:07.930821   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:07.950028   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.950062   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:07.950118   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:07.969192   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.969219   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:07.969281   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:07.988711   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.988733   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:07.988742   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:07.988752   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:08.031566   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:08.031601   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:08.091610   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:08.091651   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:08.143480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:08.143515   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:08.159139   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:08.159166   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:08.238088   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:07.647412   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:09.648220   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:10.982639   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:10.983122   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:10.983154   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:10.983063   63049 retry.go:31] will retry after 1.165405321s: waiting for machine to come up
	I0229 18:44:12.150037   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:12.150578   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:12.150613   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:12.150537   63049 retry.go:31] will retry after 1.52706972s: waiting for machine to come up
	I0229 18:44:13.680375   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:13.680960   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:13.680989   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:13.680906   63049 retry.go:31] will retry after 1.671273511s: waiting for machine to come up
	I0229 18:44:15.354871   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:15.355467   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:15.355498   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:15.355404   63049 retry.go:31] will retry after 2.220860221s: waiting for machine to come up
	I0229 18:44:10.738478   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:10.756305   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:10.780161   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.780191   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:10.780244   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:10.799891   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.799921   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:10.799981   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:10.815310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.815340   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:10.815401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:10.843908   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.843934   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:10.843996   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:10.864272   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.864295   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:10.864349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:10.882310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.882336   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:10.882407   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:10.899979   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.900006   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:10.900064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:10.917343   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.917373   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:10.917385   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:10.917399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:10.970492   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:10.970529   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:10.985824   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:10.985850   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:11.063258   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:11.063281   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:11.063296   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:11.106836   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:11.106866   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:13.671084   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:13.685411   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:13.705142   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.705173   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:13.705234   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:13.724509   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.724548   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:13.724614   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:13.744230   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.744280   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:13.744348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:13.769730   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.769759   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:13.769817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:13.799466   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.799496   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:13.799556   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:13.820793   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.820823   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:13.820887   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:13.850052   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.850082   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:13.850138   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:13.874449   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.874477   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:13.874489   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:13.874504   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:13.932481   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:13.932513   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:13.947628   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:13.947677   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:14.018240   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:14.018263   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:14.018286   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:14.059187   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:14.059217   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:12.145489   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:14.145878   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:17.577867   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:17.578465   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:17.578495   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:17.578412   63049 retry.go:31] will retry after 2.588260964s: waiting for machine to come up
	I0229 18:44:20.170174   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:20.170629   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:20.170654   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:20.170589   63049 retry.go:31] will retry after 4.074488221s: waiting for machine to come up
	I0229 18:44:16.633510   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:16.652639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:16.673532   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.673566   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:16.673618   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:16.691920   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.691945   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:16.692006   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:16.709420   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.709443   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:16.709484   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:16.727650   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.727681   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:16.727734   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:16.746267   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.746293   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:16.746344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:16.774818   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.774849   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:16.774900   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:16.799617   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.799650   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:16.799704   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:16.820466   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.820501   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:16.820515   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:16.820528   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:16.887246   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:16.887289   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:16.902847   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:16.902872   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:16.980952   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:16.980973   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:16.980990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:17.026066   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:17.026101   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:19.597286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:19.613257   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:19.630212   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.630243   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:19.630298   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:19.647871   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.647899   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:19.647953   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:19.664725   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.664760   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:19.664817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:19.682528   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.682560   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:19.682617   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:19.700820   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.700850   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:19.700917   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:19.718645   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.718673   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:19.718736   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:19.737246   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.737289   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:19.737344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:19.754748   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.754776   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:19.754793   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:19.754805   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:19.809195   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:19.809230   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:19.830327   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:19.830365   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:19.918269   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:19.918296   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:19.918313   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:19.960393   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:19.960425   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:16.146999   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:18.646605   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:24.249123   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249536   63014 main.go:141] libmachine: (newest-cni-555986) Found IP for machine: 192.168.61.240
	I0229 18:44:24.249570   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has current primary IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249577   63014 main.go:141] libmachine: (newest-cni-555986) Reserving static IP address...
	I0229 18:44:24.249960   63014 main.go:141] libmachine: (newest-cni-555986) Reserved static IP address: 192.168.61.240
	I0229 18:44:24.249990   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.250000   63014 main.go:141] libmachine: (newest-cni-555986) Waiting for SSH to be available...
	I0229 18:44:24.250017   63014 main.go:141] libmachine: (newest-cni-555986) DBG | skip adding static IP to network mk-newest-cni-555986 - found existing host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"}
	I0229 18:44:24.250026   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Getting to WaitForSSH function...
	I0229 18:44:24.251971   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252153   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.252193   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH client type: external
	I0229 18:44:24.252326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa (-rw-------)
	I0229 18:44:24.252368   63014 main.go:141] libmachine: (newest-cni-555986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:44:24.252384   63014 main.go:141] libmachine: (newest-cni-555986) DBG | About to run SSH command:
	I0229 18:44:24.252417   63014 main.go:141] libmachine: (newest-cni-555986) DBG | exit 0
	I0229 18:44:24.375769   63014 main.go:141] libmachine: (newest-cni-555986) DBG | SSH cmd err, output: <nil>: 
	I0229 18:44:24.376112   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetConfigRaw
	I0229 18:44:24.376787   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.379469   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.379875   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.379924   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.380139   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:24.380315   63014 machine.go:88] provisioning docker machine ...
	I0229 18:44:24.380331   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.380554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380737   63014 buildroot.go:166] provisioning hostname "newest-cni-555986"
	I0229 18:44:24.380758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380942   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.383071   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383373   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.383403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383495   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.383671   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383843   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383976   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.384136   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.384337   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.384352   63014 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555986 && echo "newest-cni-555986" | sudo tee /etc/hostname
	I0229 18:44:24.498766   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555986
	
	I0229 18:44:24.498797   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.501346   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.501704   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501941   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.502122   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502289   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502432   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.502647   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.502863   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.502893   63014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:44:24.614045   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
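The shell block above rewrites /etc/hosts so the machine name resolves locally: if no line already ends with the hostname, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A small Go sketch of the same decision logic, applied to an in-memory copy rather than the real file:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic in the log: if no line already
// names the host, rewrite an existing 127.0.1.1 line or append a new one.
func ensureHostsEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts // host already present
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(sample, "newest-cni-555986"))
}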
	I0229 18:44:24.614077   63014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:44:24.614100   63014 buildroot.go:174] setting up certificates
	I0229 18:44:24.614109   63014 provision.go:83] configureAuth start
	I0229 18:44:24.614117   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.614363   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.616878   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617257   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.617279   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617476   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.619950   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620245   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.620267   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620394   63014 provision.go:138] copyHostCerts
	I0229 18:44:24.620452   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:44:24.620464   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:44:24.620556   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:44:24.620684   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:44:24.620696   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:44:24.620741   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:44:24.620804   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:44:24.620813   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:44:24.620834   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:44:24.620882   63014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555986 san=[192.168.61.240 192.168.61.240 localhost 127.0.0.1 minikube newest-cni-555986]
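The "generating server cert" step above issues a machine server certificate signed by the profile CA, with the node IP, localhost and machine name in the SAN list. A rough Go sketch of that kind of issuance using crypto/x509 follows; it creates a throwaway CA instead of loading .minikube/certs/ca.pem, uses placeholder names/IPs, and elides error handling for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the profile's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same kind of SAN list as the provision step:
	// node IP plus "localhost", "minikube" and the machine name.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"example-org"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "example-node"},
		IPAddresses:  []net.IP{net.ParseIP("192.0.2.10"), net.ParseIP("127.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
}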
	I0229 18:44:24.827181   63014 provision.go:172] copyRemoteCerts
	I0229 18:44:24.827251   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:44:24.827279   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.829858   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.830156   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830301   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.830508   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.830669   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.830821   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:24.912148   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:44:24.940337   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:44:24.964760   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:44:24.989172   63014 provision.go:86] duration metric: configureAuth took 375.052041ms
	I0229 18:44:24.989199   63014 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:44:24.989409   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:24.989435   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.989688   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.992106   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992563   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.992611   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.992974   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993154   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993340   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.993520   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.993692   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.993704   63014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:44:25.097791   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:44:25.097813   63014 buildroot.go:70] root file system type: tmpfs
	I0229 18:44:25.097929   63014 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:44:25.097947   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.100783   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.101236   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101447   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.101676   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.101861   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.102013   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.102184   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.102339   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.102416   63014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:44:25.226726   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:44:25.226753   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.229479   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.229789   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.229817   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.230008   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.230223   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230411   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230581   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.230775   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.230956   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.230980   63014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
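The command above follows a write-new-then-swap pattern: the rendered unit goes to docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon reloaded, enabled and restarted. A minimal local sketch of that pattern (targeting a scratch path; a real run would need root and /lib/systemd/system/docker.service):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// installUnit mirrors the pattern in the log: render the unit to a ".new"
// file, and only swap it in (and reload/restart) when the content changed.
func installUnit(path string, content []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return nil // unit already up to date, nothing to restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	unit := filepath.Base(path)
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", unit},
		{"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installUnit("/tmp/example.service", []byte("[Unit]\nDescription=example\n"))
	fmt.Println("installUnit:", err)
}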
	I0229 18:44:22.520192   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:22.534228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:22.552116   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.552147   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:22.552192   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:22.574830   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.574867   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:22.574933   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:22.594718   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.594752   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:22.594810   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:22.615676   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.615711   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:22.615772   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:22.635359   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.635393   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:22.635455   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:22.655352   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.655381   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:22.655442   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:22.673481   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.673508   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:22.673562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:22.691542   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.691563   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:22.691573   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:22.691583   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:22.741934   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:22.741964   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:22.760644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:22.760681   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:22.838701   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:22.838724   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:22.838737   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:22.879863   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:22.879892   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
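The container-status gather above uses a fallback: run crictl if it resolves on the PATH, otherwise fall back to plain docker ps -a. A small Go sketch of that fallback:

package main

import (
	"fmt"
	"os/exec"
)

// listContainers mirrors the fallback in the log: prefer crictl when it is
// on PATH, otherwise fall back to `docker ps -a`.
func listContainers() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := listContainers()
	fmt.Printf("err=%v\n%s", err, out)
}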
	I0229 18:44:25.442546   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:25.456540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:25.476142   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.476168   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:25.476213   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:25.494185   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.494216   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:25.494275   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:25.517155   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.517187   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:25.517251   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:25.535776   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.535805   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:25.535864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:25.554255   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.554283   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:25.554326   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:25.571356   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.571383   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:25.571438   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:25.589129   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.589158   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:25.589218   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:25.607610   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.607654   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:25.607667   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:25.607683   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.669924   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:25.669954   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:21.145364   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:23.146563   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:25.146956   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:26.132356   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:44:26.132385   63014 machine.go:91] provisioned docker machine in 1.75205798s
	I0229 18:44:26.132402   63014 start.go:300] post-start starting for "newest-cni-555986" (driver="kvm2")
	I0229 18:44:26.132418   63014 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:44:26.132438   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.132741   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:44:26.132770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.135459   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135816   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.135839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135993   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.136198   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.136380   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.136509   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.220695   63014 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:44:26.225534   63014 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:44:26.225565   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:44:26.225648   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:44:26.225753   63014 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:44:26.225877   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:44:26.236218   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:26.260637   63014 start.go:303] post-start completed in 128.220021ms
	I0229 18:44:26.260663   63014 fix.go:56] fixHost completed within 20.546314149s
	I0229 18:44:26.260683   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.263403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263761   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.263791   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.264190   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264376   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264513   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.264704   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:26.264952   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:26.264972   63014 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:44:26.364534   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232266.337605764
	
	I0229 18:44:26.364556   63014 fix.go:206] guest clock: 1709232266.337605764
	I0229 18:44:26.364566   63014 fix.go:219] Guest: 2024-02-29 18:44:26.337605764 +0000 UTC Remote: 2024-02-29 18:44:26.260667088 +0000 UTC m=+20.709360868 (delta=76.938676ms)
	I0229 18:44:26.364589   63014 fix.go:190] guest clock delta is within tolerance: 76.938676ms
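The guest-clock check above runs date +%s.%N on the VM, parses the seconds.nanoseconds value and compares it against host time; the ~77ms delta is within tolerance so no clock sync is forced. A sketch of that parse-and-compare step; the 2s tolerance here is an assumption for illustration, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` style output as seen in the log
// (e.g. "1709232266.337605764") into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so a short fraction keeps its magnitude.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1709232266.337605764")
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}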
	I0229 18:44:26.364595   63014 start.go:83] releasing machines lock for "newest-cni-555986", held for 20.650256948s
	I0229 18:44:26.364617   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.364856   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:26.367497   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.367884   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.367914   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.368067   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368783   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368848   63014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:44:26.368893   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.369018   63014 ssh_runner.go:195] Run: cat /version.json
	I0229 18:44:26.369042   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.371814   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372058   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372159   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372329   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372406   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372429   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372561   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372642   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.372759   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372837   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.372910   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.373031   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.471860   63014 ssh_runner.go:195] Run: systemctl --version
	I0229 18:44:26.478160   63014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:44:26.483953   63014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:44:26.484004   63014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:44:26.501209   63014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
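The step above disables conflicting CNI configs by finding bridge/podman files in /etc/cni/net.d and renaming them with a .mk_disabled suffix (here 87-podman-bridge.conflist). A Go sketch of the same find-and-rename pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI mirrors the find/rename in the log: any bridge or podman
// config in the CNI dir that is not already disabled gets a .mk_disabled suffix.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(disabled, err)
}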
	I0229 18:44:26.501232   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.501345   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.520439   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:44:26.532631   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:44:26.544776   63014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:44:26.544846   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:44:26.556908   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.571173   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:44:26.584793   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.599578   63014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:44:26.613065   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:44:26.625963   63014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:44:26.636208   63014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:44:26.647304   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:26.773666   63014 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:44:26.805201   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.805282   63014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:44:26.828840   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.845685   63014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:44:26.864281   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.878719   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.891594   63014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:44:26.918028   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.932594   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.953389   63014 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:44:26.957403   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:44:26.966554   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:44:26.983908   63014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:44:27.099127   63014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:44:27.229263   63014 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:44:27.229402   63014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
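The 130-byte /etc/docker/daemon.json written above pins Docker's cgroup driver to cgroupfs. Its exact contents are not shown in the log; the sketch below only illustrates the general shape such a file could take, built and printed from Go.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative guess at a cgroupfs daemon.json, not the literal file from this run.
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}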
	I0229 18:44:27.248050   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:27.370928   63014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:44:28.846692   63014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.475728413s)
	I0229 18:44:28.846793   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:44:28.862710   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:28.876125   63014 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:44:28.990050   63014 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:44:29.111415   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.241702   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:44:29.259418   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:29.274090   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.405739   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:44:29.483337   63014 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:44:29.483415   63014 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
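"Will wait 60s for socket path /var/run/cri-dockerd.sock" is a stat-until-present poll with a deadline. A minimal sketch of that wait loop (the demo uses a short timeout so it returns quickly; the real wait in the log is 60s):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls with stat until the socket path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 2*time.Second))
}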
	I0229 18:44:29.489731   63014 start.go:543] Will wait 60s for crictl version
	I0229 18:44:29.489807   63014 ssh_runner.go:195] Run: which crictl
	I0229 18:44:29.493965   63014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:44:29.551137   63014 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:44:29.551214   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.585366   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.616533   63014 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 18:44:29.616588   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:29.619293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619645   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:29.619671   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619927   63014 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:44:29.624040   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.638664   63014 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 18:44:29.640035   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:29.640131   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.661958   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.662001   63014 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:44:29.662060   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.681050   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.681077   63014 cache_images.go:84] Images are preloaded, skipping loading
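The preload check above lists docker images and concludes that everything needed for v1.29.0-rc.2 is already present, so image extraction is skipped. A tiny Go sketch of the underlying set comparison, using a shortened example image list rather than the full one from the log:

package main

import "fmt"

// missingImages reports which expected images are absent from the `docker images` output.
func missingImages(present, expected []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	present := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.10-0"}
	expected := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/coredns/coredns:v1.11.1"}
	fmt.Println(missingImages(present, expected)) // -> [registry.k8s.io/coredns/coredns:v1.11.1]
}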
	I0229 18:44:29.681146   63014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:44:29.705900   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:29.705930   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:29.705950   63014 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 18:44:29.705973   63014 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.240 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-555986 NodeName:newest-cni-555986 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:44:29.706192   63014 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-555986"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:44:29.706334   63014 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-555986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:44:29.706410   63014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:44:29.717785   63014 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:44:29.717857   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:44:29.728573   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0229 18:44:29.746192   63014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:44:29.763094   63014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0229 18:44:29.780941   63014 ssh_runner.go:195] Run: grep 192.168.61.240	control-plane.minikube.internal$ /etc/hosts
	I0229 18:44:29.784664   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.796533   63014 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986 for IP: 192.168.61.240
	I0229 18:44:29.796569   63014 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:29.796698   63014 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:44:29.796746   63014 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:44:29.796809   63014 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/client.key
	I0229 18:44:29.796890   63014 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key.0e2de265
	I0229 18:44:29.796948   63014 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key
	I0229 18:44:29.797064   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:44:29.797094   63014 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:44:29.797103   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:44:29.797124   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:44:29.797154   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:44:29.797188   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:44:29.797243   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:29.797875   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:44:29.822101   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:44:29.847169   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:44:29.871405   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:44:29.898154   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:44:29.931310   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:44:29.957589   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:44:29.983801   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:44:30.011017   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:44:30.037607   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:44:30.067042   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:44:30.092561   63014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:44:30.111494   63014 ssh_runner.go:195] Run: openssl version
	I0229 18:44:30.117488   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:44:30.128877   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133493   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133540   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.139567   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:44:30.150842   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:44:30.161780   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166396   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166447   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.172649   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:44:30.183406   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:44:30.194175   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198677   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198732   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.204430   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:44:30.215298   63014 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:44:30.219939   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:44:30.225927   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:44:30.231724   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:44:30.237680   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:44:30.243550   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:44:30.249342   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:44:30.255106   63014 kubeadm.go:404] StartCluster: {Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:30.255230   63014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:30.272612   63014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:44:30.283794   63014 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:44:30.283824   63014 kubeadm.go:636] restartCluster start
	I0229 18:44:30.283885   63014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:44:30.295185   63014 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:30.296063   63014 kubeconfig.go:135] verify returned: extract IP: "newest-cni-555986" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:30.296546   63014 kubeconfig.go:146] "newest-cni-555986" context is missing from /home/jenkins/minikube-integration/18259-6402/kubeconfig - will repair!
	I0229 18:44:30.297381   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:30.299196   63014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:44:30.309378   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.309439   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.322034   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:25.721765   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:25.721797   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:25.748884   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:25.748919   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:25.862593   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:25.862613   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:25.862627   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.412364   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:28.426168   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:28.444018   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.444048   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:28.444104   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:28.462393   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.462422   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:28.462481   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:28.480993   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.481021   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:28.481065   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:28.498930   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.498974   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:28.499034   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:28.517355   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.517386   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:28.517452   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:28.536493   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.536522   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:28.536629   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:28.554364   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.554392   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:28.554448   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:28.573203   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.573229   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:28.573241   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:28.573260   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:28.628788   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:28.628820   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:28.647595   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:28.647631   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:28.726195   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:28.726215   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:28.726228   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.783540   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:28.783575   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:27.147370   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:29.653339   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:30.810019   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.810100   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.822777   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.310338   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.310472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.324112   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.809551   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.809687   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.822657   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.310271   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.310348   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.324846   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.810460   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.810534   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.824072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.309541   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.309620   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.323749   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.810371   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.810472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.823564   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.309724   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.309805   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.322875   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.809427   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.809539   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.823871   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:35.310485   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.310554   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.324367   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.358413   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:31.374228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:31.392618   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.392649   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:31.392713   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:31.411406   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.411437   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:31.411497   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:31.431126   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.431157   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:31.431204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:31.451504   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.451531   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:31.451571   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:31.470318   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.470339   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:31.470388   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:31.489264   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.489289   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:31.489341   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:31.507636   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.507672   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:31.507730   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:31.526580   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.526602   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:31.526614   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:31.526634   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:31.568164   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:31.568199   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:31.627762   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:31.627786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:31.678480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:31.678514   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:31.695623   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:31.695659   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:31.793131   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.293320   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:34.307693   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:34.328775   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.328805   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:34.328863   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:34.347049   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.347075   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:34.347126   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:34.365903   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.365933   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:34.365993   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:34.383898   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.383932   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:34.383995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:34.402605   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.402632   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:34.402694   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:34.420889   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.420918   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:34.420976   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:34.439973   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.440000   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:34.440059   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:34.457452   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.457483   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:34.457496   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:34.457510   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:34.505134   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:34.505167   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:34.520181   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:34.520212   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:34.589435   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.589455   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:34.589466   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:34.634139   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:34.634168   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:32.149594   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:34.645888   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:35.809842   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.809911   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.823992   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.309548   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.309649   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.322861   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.810470   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.810541   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.824023   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.309492   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.309593   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.323072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.809581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.809688   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.822964   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.309476   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.309584   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.322909   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.810487   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.810602   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.824118   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.309581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.309683   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.323438   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.810045   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.810149   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.823071   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.309893   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:40.309956   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:40.326570   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.326600   63014 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:44:40.326612   63014 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:44:40.326684   63014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:40.350696   63014 docker.go:483] Stopping containers: [c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3]
	I0229 18:44:40.350775   63014 ssh_runner.go:195] Run: docker stop c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3
	I0229 18:44:40.379218   63014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:44:40.406202   63014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:40.418532   63014 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:40.418593   63014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430345   63014 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430371   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:40.561772   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:37.197653   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:37.211167   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:37.233259   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.233294   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:37.233349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:37.254237   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.254264   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:37.254322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:37.274320   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.274347   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:37.274401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:37.292854   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.292880   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:37.292929   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:37.310405   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.310429   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:37.310466   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:37.328374   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.328394   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:37.328434   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:37.345294   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.345321   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:37.345383   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:37.362743   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.362768   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:37.362779   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:37.362793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:37.410877   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:37.410914   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:37.425653   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:37.425689   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:37.490957   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:37.490981   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:37.490994   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:37.530316   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:37.530344   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:40.088251   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:40.102064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:40.121304   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.121338   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:40.121392   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:40.139634   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.139682   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:40.139742   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:40.156924   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.156950   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:40.156995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:40.174050   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.174076   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:40.174117   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:40.191417   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.191444   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:40.191488   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:40.209488   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.209515   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:40.209578   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:40.226753   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.226775   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:40.226828   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:40.244478   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.244505   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:40.244516   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:40.244526   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:40.299257   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:40.299293   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:40.316326   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:40.316356   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:40.407508   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:40.407531   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:40.407545   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:40.450989   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:40.451022   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:37.145550   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:39.645463   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:41.139942   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.337079   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.447658   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.519164   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:41.519271   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.020352   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.519558   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.020287   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.051672   63014 api_server.go:72] duration metric: took 1.532507495s to wait for apiserver process to appear ...
	I0229 18:44:43.051702   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:43.051723   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.052327   63014 api_server.go:269] stopped: https://192.168.61.240:8443/healthz: Get "https://192.168.61.240:8443/healthz": dial tcp 192.168.61.240:8443: connect: connection refused
	I0229 18:44:43.552797   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.024851   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.040954   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:43.067062   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.067087   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:43.067142   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:43.112898   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.112929   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:43.112987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:43.144432   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.144516   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:43.144577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:43.180141   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.180170   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:43.180217   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:43.203493   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.203521   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:43.203562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:43.227035   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.227065   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:43.227120   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:43.247867   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.247897   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:43.247959   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:43.269511   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.269538   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:43.269550   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:43.269566   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:43.287349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:43.287380   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:43.368033   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:43.368051   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:43.368062   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:43.425200   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:43.425235   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:43.492870   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:43.492906   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:41.648546   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:44.146476   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:46.415578   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.415614   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.415633   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.462403   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.462439   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.552650   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.559420   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:46.559454   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.052823   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.059079   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.059117   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.552719   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.561838   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.561869   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:48.052436   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:48.057072   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:48.064135   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:48.064164   63014 api_server.go:131] duration metric: took 5.012454851s to wait for apiserver health ...
	I0229 18:44:48.064173   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:48.064185   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:48.066074   63014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:44:48.067507   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:44:48.078593   63014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:44:48.102538   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:48.114933   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:48.114965   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114972   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114979   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:48.114985   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:48.114990   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:48.114995   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:44:48.115002   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:48.115006   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:48.115011   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:44:48.115017   63014 system_pods.go:74] duration metric: took 12.45428ms to wait for pod list to return data ...
	I0229 18:44:48.115024   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:48.118425   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:48.118453   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:48.118465   63014 node_conditions.go:105] duration metric: took 3.434927ms to run NodePressure ...
	I0229 18:44:48.118487   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:48.394218   63014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:44:48.407374   63014 ops.go:34] apiserver oom_adj: -16
	I0229 18:44:48.407397   63014 kubeadm.go:640] restartCluster took 18.123565128s
	I0229 18:44:48.407408   63014 kubeadm.go:406] StartCluster complete in 18.152305653s
	I0229 18:44:48.407427   63014 settings.go:142] acquiring lock: {Name:mk85324150508323d0a817853e472a1fdcadc314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.407503   63014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:48.408551   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.408794   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:44:48.408811   63014 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:44:48.408877   63014 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-555986"
	I0229 18:44:48.408884   63014 addons.go:69] Setting dashboard=true in profile "newest-cni-555986"
	I0229 18:44:48.408904   63014 addons.go:234] Setting addon dashboard=true in "newest-cni-555986"
	I0229 18:44:48.408910   63014 addons.go:69] Setting metrics-server=true in profile "newest-cni-555986"
	I0229 18:44:48.408925   63014 addons.go:234] Setting addon metrics-server=true in "newest-cni-555986"
	W0229 18:44:48.408930   63014 addons.go:243] addon dashboard should already be in state true
	W0229 18:44:48.408936   63014 addons.go:243] addon metrics-server should already be in state true
	I0229 18:44:48.408961   63014 addons.go:69] Setting default-storageclass=true in profile "newest-cni-555986"
	I0229 18:44:48.408905   63014 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-555986"
	I0229 18:44:48.408987   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	W0229 18:44:48.408996   63014 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:44:48.408999   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409016   63014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-555986"
	I0229 18:44:48.409070   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.408985   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.409048   63014 cache.go:107] acquiring lock: {Name:mk0db597c024ca72f3d806b204928d2d6d5c0ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:48.409212   63014 cache.go:115] /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 18:44:48.409221   63014 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 177.702µs
	I0229 18:44:48.409233   63014 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 18:44:48.409247   63014 cache.go:87] Successfully saved all images to host disk.
	I0229 18:44:48.409439   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409451   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409463   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409524   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409532   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409545   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409558   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409652   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409679   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409964   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.410023   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.414076   63014 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-555986" context rescaled to 1 replicas
	I0229 18:44:48.414110   63014 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:44:48.416128   63014 out.go:177] * Verifying Kubernetes components...
	I0229 18:44:48.417753   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:48.430067   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0229 18:44:48.430297   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0229 18:44:48.430412   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0229 18:44:48.430460   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0229 18:44:48.430866   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.430972   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431065   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431545   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431550   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431548   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431582   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431597   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431929   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.431972   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432206   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432253   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432290   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0229 18:44:48.432364   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.432382   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.432574   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.432606   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.432958   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432959   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.433540   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.433565   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.433650   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.434192   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.434219   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.434624   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.435113   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435154   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.435691   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435710   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.436522   63014 addons.go:234] Setting addon default-storageclass=true in "newest-cni-555986"
	W0229 18:44:48.436539   63014 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:44:48.436571   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.436949   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.436982   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.453519   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0229 18:44:48.453637   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0229 18:44:48.454123   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454220   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454725   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454745   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.454863   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454877   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.455157   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455208   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455283   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0229 18:44:48.455442   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455605   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455688   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.456149   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.456163   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.456470   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.456608   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.456786   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:48.456811   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.458869   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.461038   63014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 18:44:48.459183   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.460680   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.461477   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.462531   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 18:44:48.462548   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 18:44:48.462566   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.462647   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.462653   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.462678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.464438   63014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:44:48.462979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.465829   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.465902   63014 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.465920   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:44:48.465925   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.465937   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.466007   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I0229 18:44:48.466429   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.466991   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.467012   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.467180   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.467205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.467371   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.467432   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.467587   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.467594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.467770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.467913   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.469491   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.472294   63014 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 18:44:48.470549   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.470960   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.475176   63014 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 18:44:48.473898   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.474017   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.476581   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 18:44:48.476603   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 18:44:48.475264   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.475413   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.476620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.476878   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.477547   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0229 18:44:48.477887   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.478368   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.478381   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.478677   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.479096   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.479124   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.480199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480659   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.480684   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480955   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.481090   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.481242   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.481405   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.494480   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0229 18:44:48.494928   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.495370   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.495394   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.495667   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.495799   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.497441   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.497645   63014 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.497657   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:44:48.497667   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.500838   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.501380   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501593   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.501804   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.501963   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.502090   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.742737   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 18:44:48.742770   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 18:44:48.753599   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 18:44:48.753628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 18:44:48.765187   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.781474   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.837624   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 18:44:48.837655   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 18:44:48.847412   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 18:44:48.847440   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 18:44:48.878964   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:48.879048   63014 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:44:48.879052   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:48.879064   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:48.879082   63014 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:44:48.879095   63014 cache_images.go:262] succeeded pushing to: newest-cni-555986
	I0229 18:44:48.879101   63014 cache_images.go:263] failed pushing to: 
	I0229 18:44:48.879122   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879135   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879510   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:48.879520   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879539   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.879565   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879876   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879907   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.945106   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 18:44:48.945130   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 18:44:48.946603   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:48.946628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 18:44:49.013179   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 18:44:49.013199   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 18:44:49.036118   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:49.122858   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 18:44:49.122892   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 18:44:49.215329   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 18:44:49.215361   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 18:44:49.228881   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.228905   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229150   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229175   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.229199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.229245   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.229262   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229590   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229607   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.236908   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.236931   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.237194   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.237213   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.237232   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.313570   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 18:44:49.313605   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 18:44:49.375520   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 18:44:49.375549   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 18:44:49.445233   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:49.445262   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 18:44:49.520309   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:50.293009   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.511487012s)
	I0229 18:44:50.293056   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293069   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293082   63014 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.413998817s)
	I0229 18:44:50.293122   63014 api_server.go:72] duration metric: took 1.878985811s to wait for apiserver process to appear ...
	I0229 18:44:50.293139   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:50.293159   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:50.293390   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293444   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293454   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.293472   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293745   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293858   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293880   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.300808   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:50.303536   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:50.303558   63014 api_server.go:131] duration metric: took 10.411694ms to wait for apiserver health ...
	I0229 18:44:50.303569   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:50.310252   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:50.310280   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310290   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310298   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:50.310307   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:50.310316   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:50.310335   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running
	I0229 18:44:50.310343   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:50.310356   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:50.310365   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running
	I0229 18:44:50.310376   63014 system_pods.go:74] duration metric: took 6.800137ms to wait for pod list to return data ...
	I0229 18:44:50.310386   63014 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:44:50.313209   63014 default_sa.go:45] found service account: "default"
	I0229 18:44:50.313231   63014 default_sa.go:55] duration metric: took 2.835138ms for default service account to be created ...
	I0229 18:44:50.313244   63014 kubeadm.go:581] duration metric: took 1.899107276s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 18:44:50.313262   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:50.315732   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:50.315752   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:50.315765   63014 node_conditions.go:105] duration metric: took 2.49465ms to run NodePressure ...
	I0229 18:44:50.315778   63014 start.go:228] waiting for startup goroutines ...
	I0229 18:44:50.412181   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.376016712s)
	I0229 18:44:50.412237   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412253   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412517   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412562   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412602   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412620   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412632   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412844   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412879   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412886   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412909   63014 addons.go:470] Verifying addon metrics-server=true in "newest-cni-555986"
	I0229 18:44:50.642086   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.121716457s)
	I0229 18:44:50.642146   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642162   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642465   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642487   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642498   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642506   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642526   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.642764   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642774   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642777   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.644564   63014 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-555986 addons enable metrics-server
	
	I0229 18:44:50.646195   63014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 18:44:46.045085   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:46.060842   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:46.080115   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.080151   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:46.080204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:46.098951   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.098977   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:46.099045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:46.117884   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.117914   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:46.117962   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:46.135090   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.135122   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:46.135183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:46.154068   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.154094   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:46.154150   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:46.175259   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.175291   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:46.175348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:46.199979   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.200010   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:46.200073   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:46.219082   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.219109   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:46.219118   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:46.219129   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:46.285752   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:46.285802   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:46.362896   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:46.362923   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:46.424465   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:46.424496   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:46.440644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:46.440676   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:46.516207   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.017356   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:49.036558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:49.062037   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.062073   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:49.062122   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:49.089359   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.089383   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:49.089436   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:49.112366   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.112397   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:49.112447   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:49.135268   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.135300   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:49.135357   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:49.158768   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.158795   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:49.158862   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:49.182032   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.182056   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:49.182100   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:49.202844   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.202880   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:49.202937   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:49.223496   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.223522   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:49.223533   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:49.223548   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:49.283784   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:49.283833   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:49.299408   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:49.299450   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:49.381751   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.381777   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:49.381793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:49.425633   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:49.425671   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:50.647633   63014 addons.go:505] enable addons completed in 2.238822444s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0229 18:44:50.647682   63014 start.go:233] waiting for cluster config update ...
	I0229 18:44:50.647711   63014 start.go:242] writing updated cluster config ...
	I0229 18:44:50.648039   63014 ssh_runner.go:195] Run: rm -f paused
	I0229 18:44:50.699121   63014 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:44:50.700743   63014 out.go:177] * Done! kubectl is now configured to use "newest-cni-555986" cluster and "default" namespace by default
	I0229 18:44:46.147159   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:48.147947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:50.646890   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:51.992923   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:52.009101   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:52.030751   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.030778   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:52.030834   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:52.051175   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.051205   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:52.051258   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:52.070270   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.070292   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:52.070346   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:52.089729   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.089755   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:52.089807   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:52.109158   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.109181   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:52.109235   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:52.127440   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.127464   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:52.127509   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:52.146458   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.146485   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:52.146542   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:52.164899   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.164925   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:52.164934   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:52.164944   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:52.223827   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:52.223870   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:52.245832   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:52.245869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:52.350010   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:52.350037   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:52.350051   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:52.400763   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:52.400792   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:54.965688   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:54.984737   61028 kubeadm.go:640] restartCluster took 4m13.179905747s
	W0229 18:44:54.984813   61028 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 18:44:54.984842   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:44:55.440354   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:55.456286   61028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:55.467480   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:55.478159   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:55.478205   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:44:55.539798   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:44:55.539888   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:44:53.148909   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.149846   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.752087   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:44:55.752264   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:44:55.752401   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:44:55.906569   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:44:55.907774   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:44:55.917392   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:44:56.046677   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:44:56.048655   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:44:56.048771   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:44:56.048874   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:44:56.048992   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:44:56.052691   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:44:56.052805   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:44:56.052890   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:44:56.052984   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:44:56.053096   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:44:56.053215   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:44:56.053320   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:44:56.053379   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:44:56.053475   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:44:56.176574   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:44:56.329888   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:44:56.623253   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:44:56.722273   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:44:56.723020   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:44:56.724880   61028 out.go:204]   - Booting up control plane ...
	I0229 18:44:56.725005   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:44:56.730320   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:44:56.731630   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:44:56.732332   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:44:56.734500   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:44:57.646118   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:59.648032   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:02.144840   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:04.145112   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:06.146649   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:08.647051   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:11.148318   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:13.646816   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:16.145165   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:18.146437   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:20.147686   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:22.645925   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:25.146444   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:27.645765   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:29.646621   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:31.647146   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:34.145657   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:36.735482   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:45:36.736181   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:36.736433   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:36.145891   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:38.149811   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:40.646401   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:41.737158   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:41.737332   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:43.145942   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:45.146786   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:47.648714   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:50.145240   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:51.737722   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:51.737923   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:52.145341   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:54.145559   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:56.646087   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:58.646249   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:00.646466   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:02.647293   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:05.146452   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:07.646128   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:10.147008   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:11.738541   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:11.738773   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:12.646406   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:14.647319   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:17.146097   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:19.146615   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:21.147384   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:23.646155   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:25.647369   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:28.146558   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:30.645408   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:32.649260   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:34.650076   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:37.146414   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:39.146947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:41.645903   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:43.646016   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:45.646056   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:47.646659   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:49.647440   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:51.739942   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:51.740223   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:51.740253   61028 kubeadm.go:322] 
	I0229 18:46:51.740302   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:51.740342   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:51.740349   61028 kubeadm.go:322] 
	I0229 18:46:51.740377   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:51.740404   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:51.740528   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:51.740544   61028 kubeadm.go:322] 
	I0229 18:46:51.740646   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:51.740675   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:51.740726   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:51.740736   61028 kubeadm.go:322] 
	I0229 18:46:51.740844   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:51.740950   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:51.741029   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:51.741103   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:51.741204   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:51.741261   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:51.742036   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:46:51.742190   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:46:51.742337   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:46:51.742464   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:51.742640   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:51.742725   61028 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
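The kubeadm failure text above already names the manual checks it expects an operator to run. As a minimal sketch of running them in one pass (assuming shell access to the node under test, e.g. via 'minikube ssh' with the affected profile - that entry point is an assumption about this environment, not something shown in the log), the same diagnostics would be:

	# kubelet health endpoint polled by kubeadm (port 10248)
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# control-plane containers the runtime started, if any
	sudo docker ps -a | grep kube | grep -v pause
	# logs of a failing container (CONTAINERID taken from the previous command)
	sudo docker logs CONTAINERID

Every command here is quoted from the kubeadm output itself; only the 'minikube ssh' entry point is assumed.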
	
	I0229 18:46:51.742786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:46:52.197144   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:52.214163   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:46:52.226374   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:46:52.226416   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:46:52.285152   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:52.285314   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:52.500283   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:52.500430   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:52.500558   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:52.672731   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:52.672847   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:52.681682   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:52.809851   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:52.811832   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:52.811937   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:52.812027   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:52.812099   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:52.812153   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:52.812252   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:52.812333   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:52.812427   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:52.812513   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:52.812652   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:52.813069   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:52.813244   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:52.813324   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:52.931955   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:53.294257   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:53.376114   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:53.620085   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:53.620974   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:53.622696   61028 out.go:204]   - Booting up control plane ...
	I0229 18:46:53.622772   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:53.627326   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:53.628386   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:53.629224   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:53.632638   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:46:52.145625   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:54.146306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:56.146385   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:58.649533   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:01.145784   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:03.648061   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:06.145955   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:07.645834   60121 pod_ready.go:81] duration metric: took 4m0.007156334s waiting for pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace to be "Ready" ...
	E0229 18:47:07.645859   60121 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 18:47:07.645869   60121 pod_ready.go:38] duration metric: took 4m1.184866089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:47:07.645887   60121 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:47:07.645945   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:07.671520   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:07.671613   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:07.691503   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:07.691571   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:07.710557   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:07.710627   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:07.730780   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:07.730868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:07.749894   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:07.749981   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:07.772545   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:07.772620   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:07.791523   60121 logs.go:276] 0 containers: []
	W0229 18:47:07.791554   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:07.791604   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:07.812744   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:07.812833   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:07.831469   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:07.831505   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:07.831515   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:07.904596   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:07.904778   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:07.929197   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:07.929234   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:07.965399   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:07.965430   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:07.997552   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:07.997582   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:08.043918   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:08.043954   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:08.068540   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:08.068569   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:08.093297   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:08.093326   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:08.160393   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:08.160432   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:08.234099   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:08.234128   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:08.249381   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:08.249406   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:08.411423   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:08.411457   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:08.440486   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:08.440516   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:08.474207   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:08.474320   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:08.498143   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498169   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:08.498225   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:08.498241   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:08.498252   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:08.498266   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498277   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:18.499396   60121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:47:18.517660   60121 api_server.go:72] duration metric: took 4m15.022647547s to wait for apiserver process to appear ...
	I0229 18:47:18.517688   60121 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:47:18.517766   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:18.542263   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:18.542333   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:18.565885   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:18.565964   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:18.585135   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:18.585213   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:18.605789   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:18.605850   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:18.624993   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:18.625062   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:18.648049   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:18.648118   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:18.668689   60121 logs.go:276] 0 containers: []
	W0229 18:47:18.668713   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:18.668759   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:18.691741   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:18.691813   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:18.713776   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:18.713810   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:18.713823   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:18.781369   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:18.781564   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:18.808924   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:18.808965   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:18.824723   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:18.824756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:18.854531   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:18.854576   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:18.897618   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:18.897650   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:18.936914   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:18.936946   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:19.011250   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:19.011280   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:19.075817   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:19.075850   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:19.200261   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:19.200299   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:19.236988   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:19.237015   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:19.269721   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:19.269750   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:19.296918   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:19.296944   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:19.319721   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:19.319753   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:19.342330   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342355   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:19.342410   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:19.342423   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:19.342429   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:19.342437   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342447   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:29.343918   60121 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8444/healthz ...
	I0229 18:47:29.350861   60121 api_server.go:279] https://192.168.39.148:8444/healthz returned 200:
	ok
	I0229 18:47:29.352541   60121 api_server.go:141] control plane version: v1.28.4
	I0229 18:47:29.352560   60121 api_server.go:131] duration metric: took 10.834865386s to wait for apiserver health ...
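The healthz wait above succeeded ("ok" at 18:47:29). A manual equivalent of that probe against the same endpoint would look as follows; the certificate paths are assumptions based on the profile layout visible elsewhere in this report, not values read from this log:

	curl --cacert $HOME/.minikube/ca.crt \
	     --cert $HOME/.minikube/profiles/default-k8s-diff-port-270866/client.crt \
	     --key  $HOME/.minikube/profiles/default-k8s-diff-port-270866/client.key \
	     https://192.168.39.148:8444/healthz
	# a healthy apiserver answers with: ok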
	I0229 18:47:29.352569   60121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:47:29.352633   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:29.373466   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:29.373535   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:29.394287   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:29.394375   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:29.415331   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:29.415410   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:29.436682   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:29.436764   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:29.456935   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:29.457003   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:29.475799   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:29.475868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:29.496876   60121 logs.go:276] 0 containers: []
	W0229 18:47:29.496904   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:29.496963   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:29.516724   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:29.516794   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:29.535652   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:29.535683   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:29.535693   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:29.559535   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:29.559563   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:29.587928   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:29.587952   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:29.610085   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:29.610111   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:29.673987   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:29.674033   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:29.751324   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:29.751355   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:29.876322   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:29.876347   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:29.900325   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:29.900349   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:29.936137   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:29.936167   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:29.969468   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:29.969499   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:30.017539   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:30.017587   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:30.093486   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.093682   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.124169   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:30.124211   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:30.140725   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:30.140756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:30.174590   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174628   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:30.174694   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:30.174708   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.174715   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.174726   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174731   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:33.634399   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:47:33.635096   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:33.635349   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:40.185191   60121 system_pods.go:59] 8 kube-system pods found
	I0229 18:47:40.185222   60121 system_pods.go:61] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.185227   60121 system_pods.go:61] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.185232   60121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.185235   60121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.185238   60121 system_pods.go:61] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.185241   60121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.185247   60121 system_pods.go:61] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.185251   60121 system_pods.go:61] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.185257   60121 system_pods.go:74] duration metric: took 10.832681757s to wait for pod list to return data ...
	I0229 18:47:40.185264   60121 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:47:40.188055   60121 default_sa.go:45] found service account: "default"
	I0229 18:47:40.188075   60121 default_sa.go:55] duration metric: took 2.8056ms for default service account to be created ...
	I0229 18:47:40.188083   60121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:47:40.199288   60121 system_pods.go:86] 8 kube-system pods found
	I0229 18:47:40.199317   60121 system_pods.go:89] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.199325   60121 system_pods.go:89] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.199330   60121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.199335   60121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.199340   60121 system_pods.go:89] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.199347   60121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.199359   60121 system_pods.go:89] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.199369   60121 system_pods.go:89] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.199383   60121 system_pods.go:126] duration metric: took 11.294328ms to wait for k8s-apps to be running ...
	I0229 18:47:40.199394   60121 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:47:40.199452   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:47:40.216650   60121 system_svc.go:56] duration metric: took 17.247343ms WaitForService to wait for kubelet.
	I0229 18:47:40.216679   60121 kubeadm.go:581] duration metric: took 4m36.72166867s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:47:40.216705   60121 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:47:40.220111   60121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:47:40.220142   60121 node_conditions.go:123] node cpu capacity is 2
	I0229 18:47:40.220157   60121 node_conditions.go:105] duration metric: took 3.446433ms to run NodePressure ...
	I0229 18:47:40.220172   60121 start.go:228] waiting for startup goroutines ...
	I0229 18:47:40.220180   60121 start.go:233] waiting for cluster config update ...
	I0229 18:47:40.220193   60121 start.go:242] writing updated cluster config ...
	I0229 18:47:40.220531   60121 ssh_runner.go:195] Run: rm -f paused
	I0229 18:47:40.268347   60121 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:47:40.270302   60121 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-270866" cluster and "default" namespace by default
	I0229 18:47:38.635813   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:38.636020   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:48.636649   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:48.636873   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:08.637971   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:08.638214   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639456   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:48.639757   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639779   61028 kubeadm.go:322] 
	I0229 18:48:48.639840   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:48:48.639924   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:48:48.639950   61028 kubeadm.go:322] 
	I0229 18:48:48.640004   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:48:48.640046   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:48:48.640168   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:48:48.640178   61028 kubeadm.go:322] 
	I0229 18:48:48.640273   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:48:48.640313   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:48:48.640347   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:48:48.640353   61028 kubeadm.go:322] 
	I0229 18:48:48.640439   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:48:48.640559   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:48:48.640671   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:48:48.640752   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:48:48.640864   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:48:48.640919   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:48:48.641703   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:48:48.641878   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:48:48.641968   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:48:48.642071   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:48:48.642249   61028 kubeadm.go:406] StartCluster complete in 8m6.867140018s
	I0229 18:48:48.642265   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:48:48.642322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:48:48.674320   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.674348   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:48:48.674398   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:48:48.695124   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.695148   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:48:48.695190   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:48:48.712218   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.712245   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:48:48.712299   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:48:48.730912   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.730939   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:48:48.730982   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:48:48.748542   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.748576   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:48:48.748622   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:48:48.765544   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.765570   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:48:48.765623   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:48:48.791193   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.791238   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:48:48.791296   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:48:48.813084   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.813119   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:48:48.813132   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:48:48.813144   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:48:48.834348   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:48:48.834373   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:48:48.911451   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:48:48.911473   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:48:48.911485   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:48:48.954088   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:48:48.954119   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:48:49.019061   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:48:49.019092   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:48:49.067347   61028 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:48:49.067396   61028 out.go:239] * 
	W0229 18:48:49.067456   61028 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.067477   61028 out.go:239] * 
	W0229 18:48:49.068210   61028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:48:49.072114   61028 out.go:177] 
	W0229 18:48:49.073581   61028 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.073626   61028 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:48:49.073649   61028 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:48:49.075293   61028 out.go:177] 
	
	
	==> Docker <==
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050425153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050467385Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050514780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050552148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050590447Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050660627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050699694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050735468Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050781822Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050897158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051019076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051064571Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051441623Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051565243Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051659095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051747686Z" level=info msg="containerd successfully booted in 0.034113s"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.252862682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.297343935Z" level=info msg="Loading containers: start."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.417489065Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.467932343Z" level=info msg="Loading containers: done."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482234448Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482355814Z" level=info msg="Daemon has completed initialization"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.517930017Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.518369987Z" level=info msg="API listen on [::]:2376"
	Feb 29 18:40:40 old-k8s-version-467811 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-29T18:48:50Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056516] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.804914] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.680946] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.687335] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.061500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060694] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +1.140707] systemd-fstab-generator[780]: Ignoring "noauto" option for root device
	[  +0.360984] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.131688] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.149280] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +5.508694] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.066369] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.235011] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.074300] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9503]: Ignoring "noauto" option for root device
	[  +0.067712] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 18:46] systemd-fstab-generator[11264]: Ignoring "noauto" option for root device
	[  +0.072343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:48:50 up 8 min,  0 users,  load average: 0.18, 0.36, 0.22
	Linux old-k8s-version-467811 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 18:48:48 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: I0229 18:48:48.835803   12908 server.go:410] Version: v1.16.0
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: I0229 18:48:48.836232   12908 plugins.go:100] No cloud provider specified.
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: I0229 18:48:48.836284   12908 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: I0229 18:48:48.838888   12908 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: W0229 18:48:48.839777   12908 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: W0229 18:48:48.839877   12908 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 18:48:48 old-k8s-version-467811 kubelet[12908]: F0229 18:48:48.839936   12908 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:48:48 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:48:48 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:48:49 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 155.
	Feb 29 18:48:49 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:48:49 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: I0229 18:48:49.530257   12956 server.go:410] Version: v1.16.0
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: I0229 18:48:49.531083   12956 plugins.go:100] No cloud provider specified.
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: I0229 18:48:49.531229   12956 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: I0229 18:48:49.539805   12956 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: W0229 18:48:49.541355   12956 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: W0229 18:48:49.541651   12956 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 18:48:49 old-k8s-version-467811 kubelet[12956]: F0229 18:48:49.541852   12956 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:48:49 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:48:49 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:48:50 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Feb 29 18:48:50 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:48:50 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

-- /stdout --
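The kubelet section of the captured log above shows the service crash-looping on "failed to run Kubelet: mountpoint for cpu not found" (restart counters 155-156), which lines up with the cgroup-driver warnings in the kubeadm preflight output. A minimal diagnostic sketch, assuming SSH access to the node through the profile used in this run; these commands are illustrative follow-ups and are not part of the recorded test:

	# list cgroup mounts on the node; the kubelet error above says the cpu mountpoint was not found
	out/minikube-linux-amd64 ssh -p old-k8s-version-467811 "grep cgroup /proc/mounts"
	# list the cgroup controllers the kernel exposes
	out/minikube-linux-amd64 ssh -p old-k8s-version-467811 "cat /proc/cgroups"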
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (254.434716ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (520.03s)
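The error output above carries minikube's own hint ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") and kubeadm's preflight warning that Docker is using the "cgroupfs" driver while "systemd" is recommended. A hedged sketch of that retry for this profile; apart from the profile name, the Kubernetes version visible in the log, and the suggested --extra-config flag, the remaining flags are assumptions and should be matched to the test's real start arguments:

	# re-run the failed start with the kubelet cgroup driver pinned to systemd, per the suggestion in the log
	out/minikube-linux-amd64 start -p old-k8s-version-467811 \
	  --kubernetes-version=v1.16.0 --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# alternative (assumption, not taken from this report): switch Docker itself to the systemd driver
	# by writing /etc/docker/daemon.json on the node and restarting docker:
	#   { "exec-opts": ["native.cgroupdriver=systemd"] }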

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:49:20.843462   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:49:28.711139   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:49:31.848983   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:50:09.379770   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:50:17.206756   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:50:18.987325   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:50:23.103254   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:50:23.977000   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:00.469693   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:18.620296   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:32.429589   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:36.999412   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:40.250257   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:46.149124   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:57.968352   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:57.973634   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:57.983898   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:58.004164   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:58.044479   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:58.124883   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:58.285346   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
E0229 18:51:58.605957   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:51:59.246588   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:00.527386   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:03.087778   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:04.683763   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:06.913744   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:08.208700   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:18.448939   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:38.930010   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:52:41.665504   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:01.104662   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:08.805348   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:19.891226   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:24.374687   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:29.959524   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:53:32.449328   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:54:24.149379   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:54:28.710758   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:54:41.811779   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 5 times]
E0229 18:54:47.417998   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 8 times]
E0229 18:54:55.492655   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 14 times]
E0229 18:55:09.379608   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 8 times]
E0229 18:55:17.206548   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 2 times]
E0229 18:55:18.987211   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 4 times]
E0229 18:55:23.102896   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:55:23.977789   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 28 times]
E0229 18:55:51.756074   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 8 times]
E0229 18:56:00.469636   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 18 times]
E0229 18:56:18.620636   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 19 times]
E0229 18:56:36.998748   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 5 times]
E0229 18:56:42.033132   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 16 times]
E0229 18:56:57.968521   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 9 times]
E0229 18:57:06.913720   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused [repeated 16 times]
E0229 18:57:23.521295   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:57:25.652950   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (231.341658ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-467811" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
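Note on the failure above: the repeated WARNING lines come from the test's poll loop, which keeps listing pods matching the k8s-app=kubernetes-dashboard selector until one is Running or the 9m0s context deadline expires; every failed list (connection refused while the apiserver is down, then the client rate limiter hitting the deadline) is logged and retried. Below is a minimal client-go sketch of that style of wait. It is an illustration only, not minikube's actual PodWait helper; the kubeconfig path and the 5-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodByLabel lists pods matching selector until one reports phase
// Running or the context deadline expires. List errors (e.g. "connect:
// connection refused" while the apiserver is stopped) are logged and retried.
func waitForPodByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			// This is the "context deadline exceeded" seen in the failure above.
			return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
		case <-time.After(5 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPodByLabel(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println(err)
	}
}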
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (230.90429ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
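Aside: the --format flags used above ({{.APIServer}}, {{.Host}}) are Go text/template expressions evaluated against minikube's status struct, which is why each command prints just "Stopped" or "Running". A rough stand-in, assuming a simplified Status type with only the two fields the log queries (the real minikube struct has more):

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in with only the fields queried above.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Stopped", matching the -- stdout -- block above.
}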
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467811 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-154269 image list                          | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | no-preload-580872 image list                           | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| addons  | enable metrics-server -p newest-cni-555986             | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-555986                  | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-555986 image list                           | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| image   | default-k8s-diff-port-270866                           | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:44:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:44:05.607270   63014 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:44:05.607394   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607403   63014 out.go:304] Setting ErrFile to fd 2...
	I0229 18:44:05.607407   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607676   63014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:44:05.608237   63014 out.go:298] Setting JSON to false
	I0229 18:44:05.609156   63014 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5196,"bootTime":1709227050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:44:05.609218   63014 start.go:139] virtualization: kvm guest
	I0229 18:44:05.611560   63014 out.go:177] * [newest-cni-555986] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:44:05.613001   63014 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:44:05.614331   63014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:44:05.612955   63014 notify.go:220] Checking for updates...
	I0229 18:44:05.617084   63014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:05.618405   63014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:44:05.619690   63014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:44:05.620981   63014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:44:01.997181   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:02.011206   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:02.030099   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.030125   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:02.030173   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:02.048060   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.048086   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:02.048144   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:02.066190   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.066220   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:02.066284   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:02.085484   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.085509   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:02.085568   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:02.109533   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.109559   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:02.109615   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:02.131800   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.131822   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:02.131864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:02.151122   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.151154   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:02.151208   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:02.171811   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.171846   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:02.171859   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:02.171873   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:02.216251   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:02.216284   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:02.276667   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:02.276698   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:02.328533   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:02.328564   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:02.344290   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:02.344329   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:02.414487   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
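Note on the log-gathering passes above: each pass probes for the control-plane containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}} and logs "No container was found matching ..." when the filter returns nothing, then falls back to journalctl, crictl/docker ps, dmesg, and kubectl describe nodes. The sketch below reproduces that container probe locally; minikube's logs.go runs these commands over SSH inside the VM, so this is only an approximation of the idea.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentContainerIDs lists docker container IDs whose name matches the
// k8s_<component> prefix, the same filter used in the log above.
func componentContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := componentContainerIDs(c)
		if err != nil {
			fmt.Println("probe failed:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}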
	I0229 18:44:04.915506   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:04.930595   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:04.949852   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.949885   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:04.949943   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:04.968164   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.968193   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:04.968252   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:04.987171   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.987196   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:04.987241   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:05.004487   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.004517   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:05.004575   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:05.022570   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.022604   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:05.022659   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:05.040454   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.040481   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:05.040540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:05.061471   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.061502   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:05.061558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:05.079346   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.079377   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:05.079389   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:05.079404   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:05.093664   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:05.093691   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:05.164031   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:05.164048   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:05.164058   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:05.207561   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:05.207596   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:05.263450   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:05.263484   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:05.622668   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:05.623031   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.623066   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.638058   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0229 18:44:05.638482   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.638964   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.638985   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.639298   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.639500   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.639802   63014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:44:05.640142   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.640184   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.654483   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0229 18:44:05.654869   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.655391   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.655411   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.655711   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.655946   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.692636   63014 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:44:05.694074   63014 start.go:299] selected driver: kvm2
	I0229 18:44:05.694084   63014 start.go:903] validating driver "kvm2" against &{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false
node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.694190   63014 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:44:05.694807   63014 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.694873   63014 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:44:05.709500   63014 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:44:05.710380   63014 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 18:44:05.710470   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:05.710493   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:05.710517   63014 start_flags.go:323] config:
	{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> E
xposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.710788   63014 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.712665   63014 out.go:177] * Starting control plane node newest-cni-555986 in cluster newest-cni-555986
	I0229 18:44:03.148306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:05.151204   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
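The pod_ready.go lines interleaved here come from a third minikube process (pid 60121) waiting for the metrics-server pod; 'has status "Ready":"False"' refers to the pod's PodReady condition rather than its phase. A simplified check of that condition, offered as an illustration rather than minikube's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod shaped like metrics-server-57f55c9bc5-w95ms above: not Ready yet.
	p := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("pod Ready:", podReady(p)) // prints "pod Ready: false"
}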
	I0229 18:44:05.713933   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:05.713962   63014 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 18:44:05.713970   63014 cache.go:56] Caching tarball of preloaded images
	I0229 18:44:05.714027   63014 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:44:05.714037   63014 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 18:44:05.714127   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:05.714292   63014 start.go:365] acquiring machines lock for newest-cni-555986: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:44:05.714330   63014 start.go:369] acquired machines lock for "newest-cni-555986" in 19.249µs
	I0229 18:44:05.714342   63014 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:44:05.714349   63014 fix.go:54] fixHost starting: 
	I0229 18:44:05.714583   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.714604   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.728926   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0229 18:44:05.729416   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.729927   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.729954   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.730372   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.730554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.730711   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:05.732365   63014 fix.go:102] recreateIfNeeded on newest-cni-555986: state=Stopped err=<nil>
	I0229 18:44:05.732405   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	W0229 18:44:05.732559   63014 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:44:05.734332   63014 out.go:177] * Restarting existing kvm2 VM for "newest-cni-555986" ...
	I0229 18:44:05.735801   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Start
	I0229 18:44:05.736011   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring networks are active...
	I0229 18:44:05.736741   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network default is active
	I0229 18:44:05.737082   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network mk-newest-cni-555986 is active
	I0229 18:44:05.737422   63014 main.go:141] libmachine: (newest-cni-555986) Getting domain xml...
	I0229 18:44:05.738474   63014 main.go:141] libmachine: (newest-cni-555986) Creating domain...
	I0229 18:44:06.970960   63014 main.go:141] libmachine: (newest-cni-555986) Waiting to get IP...
	I0229 18:44:06.971959   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:06.972427   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:06.972494   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:06.972409   63049 retry.go:31] will retry after 191.930654ms: waiting for machine to come up
	I0229 18:44:07.165902   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.166504   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.166542   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.166425   63049 retry.go:31] will retry after 380.972246ms: waiting for machine to come up
	I0229 18:44:07.549044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.549505   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.549533   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.549448   63049 retry.go:31] will retry after 409.460218ms: waiting for machine to come up
	I0229 18:44:07.960093   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.960729   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.960764   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.960680   63049 retry.go:31] will retry after 494.525541ms: waiting for machine to come up
	I0229 18:44:08.456512   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:08.457044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:08.457070   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:08.457006   63049 retry.go:31] will retry after 702.742264ms: waiting for machine to come up
	I0229 18:44:09.160839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.161340   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.161399   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.161277   63049 retry.go:31] will retry after 791.133205ms: waiting for machine to come up
	I0229 18:44:09.953571   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.954234   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.954266   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.954187   63049 retry.go:31] will retry after 1.026362572s: waiting for machine to come up
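While restarting the newest-cni-555986 VM, the kvm2 driver repeatedly asks libvirt for the domain's IP address and backs off between attempts (191ms, 380ms, 494ms, ... above). A generic retry-with-growing-jittered-delay sketch in the same spirit follows; the base delay, growth factor, and attempt cap are assumptions, not minikube's actual retry values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// waitFor retries fn with a growing, jittered delay, the pattern behind the
// "will retry after ...: waiting for machine to come up" lines above.
func waitFor(fn func() error, attempts int) error {
	delay := 200 * time.Millisecond // assumed base delay
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return fmt.Errorf("machine did not come up after %d attempts", attempts)
}

func main() {
	tries := 0
	err := waitFor(func() error {
		tries++
		if tries < 4 {
			return errNoIP // pretend the IP is not visible yet
		}
		return nil
	}, 10)
	fmt.Println("result:", err)
}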
	I0229 18:44:07.813986   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:07.834016   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:07.856292   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.856330   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:07.856390   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:07.874903   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.874933   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:07.874988   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:07.893822   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.893849   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:07.893904   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:07.911815   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.911840   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:07.911896   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:07.930733   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.930763   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:07.930821   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:07.950028   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.950062   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:07.950118   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:07.969192   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.969219   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:07.969281   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:07.988711   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.988733   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:07.988742   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:07.988752   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:08.031566   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:08.031601   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:08.091610   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:08.091651   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:08.143480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:08.143515   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:08.159139   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:08.159166   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:08.238088   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:07.647412   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:09.648220   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:10.982639   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:10.983122   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:10.983154   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:10.983063   63049 retry.go:31] will retry after 1.165405321s: waiting for machine to come up
	I0229 18:44:12.150037   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:12.150578   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:12.150613   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:12.150537   63049 retry.go:31] will retry after 1.52706972s: waiting for machine to come up
	I0229 18:44:13.680375   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:13.680960   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:13.680989   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:13.680906   63049 retry.go:31] will retry after 1.671273511s: waiting for machine to come up
	I0229 18:44:15.354871   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:15.355467   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:15.355498   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:15.355404   63049 retry.go:31] will retry after 2.220860221s: waiting for machine to come up
	I0229 18:44:10.738478   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:10.756305   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:10.780161   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.780191   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:10.780244   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:10.799891   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.799921   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:10.799981   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:10.815310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.815340   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:10.815401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:10.843908   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.843934   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:10.843996   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:10.864272   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.864295   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:10.864349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:10.882310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.882336   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:10.882407   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:10.899979   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.900006   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:10.900064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:10.917343   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.917373   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:10.917385   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:10.917399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:10.970492   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:10.970529   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:10.985824   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:10.985850   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:11.063258   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:11.063281   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:11.063296   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:11.106836   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:11.106866   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:13.671084   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:13.685411   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:13.705142   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.705173   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:13.705234   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:13.724509   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.724548   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:13.724614   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:13.744230   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.744280   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:13.744348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:13.769730   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.769759   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:13.769817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:13.799466   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.799496   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:13.799556   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:13.820793   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.820823   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:13.820887   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:13.850052   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.850082   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:13.850138   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:13.874449   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.874477   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:13.874489   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:13.874504   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:13.932481   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:13.932513   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:13.947628   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:13.947677   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:14.018240   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:14.018263   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:14.018286   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:14.059187   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:14.059217   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:12.145489   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:14.145878   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:17.577867   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:17.578465   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:17.578495   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:17.578412   63049 retry.go:31] will retry after 2.588260964s: waiting for machine to come up
	I0229 18:44:20.170174   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:20.170629   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:20.170654   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:20.170589   63049 retry.go:31] will retry after 4.074488221s: waiting for machine to come up
	I0229 18:44:16.633510   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:16.652639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:16.673532   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.673566   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:16.673618   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:16.691920   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.691945   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:16.692006   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:16.709420   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.709443   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:16.709484   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:16.727650   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.727681   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:16.727734   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:16.746267   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.746293   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:16.746344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:16.774818   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.774849   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:16.774900   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:16.799617   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.799650   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:16.799704   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:16.820466   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.820501   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:16.820515   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:16.820528   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:16.887246   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:16.887289   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:16.902847   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:16.902872   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:16.980952   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:16.980973   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:16.980990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:17.026066   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:17.026101   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:19.597286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:19.613257   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:19.630212   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.630243   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:19.630298   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:19.647871   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.647899   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:19.647953   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:19.664725   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.664760   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:19.664817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:19.682528   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.682560   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:19.682617   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:19.700820   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.700850   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:19.700917   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:19.718645   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.718673   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:19.718736   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:19.737246   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.737289   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:19.737344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:19.754748   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.754776   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:19.754793   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:19.754805   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:19.809195   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:19.809230   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:19.830327   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:19.830365   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:19.918269   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:19.918296   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:19.918313   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:19.960393   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:19.960425   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:16.146999   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:18.646605   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:24.249123   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249536   63014 main.go:141] libmachine: (newest-cni-555986) Found IP for machine: 192.168.61.240
	I0229 18:44:24.249570   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has current primary IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249577   63014 main.go:141] libmachine: (newest-cni-555986) Reserving static IP address...
	I0229 18:44:24.249960   63014 main.go:141] libmachine: (newest-cni-555986) Reserved static IP address: 192.168.61.240
	I0229 18:44:24.249990   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.250000   63014 main.go:141] libmachine: (newest-cni-555986) Waiting for SSH to be available...
	I0229 18:44:24.250017   63014 main.go:141] libmachine: (newest-cni-555986) DBG | skip adding static IP to network mk-newest-cni-555986 - found existing host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"}
	I0229 18:44:24.250026   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Getting to WaitForSSH function...
	I0229 18:44:24.251971   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252153   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.252193   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH client type: external
	I0229 18:44:24.252326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa (-rw-------)
	I0229 18:44:24.252368   63014 main.go:141] libmachine: (newest-cni-555986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:44:24.252384   63014 main.go:141] libmachine: (newest-cni-555986) DBG | About to run SSH command:
	I0229 18:44:24.252417   63014 main.go:141] libmachine: (newest-cni-555986) DBG | exit 0
	I0229 18:44:24.375769   63014 main.go:141] libmachine: (newest-cni-555986) DBG | SSH cmd err, output: <nil>: 
	I0229 18:44:24.376112   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetConfigRaw
	I0229 18:44:24.376787   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.379469   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.379875   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.379924   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.380139   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:24.380315   63014 machine.go:88] provisioning docker machine ...
	I0229 18:44:24.380331   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.380554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380737   63014 buildroot.go:166] provisioning hostname "newest-cni-555986"
	I0229 18:44:24.380758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380942   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.383071   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383373   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.383403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383495   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.383671   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383843   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383976   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.384136   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.384337   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.384352   63014 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555986 && echo "newest-cni-555986" | sudo tee /etc/hostname
	I0229 18:44:24.498766   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555986
	
	I0229 18:44:24.498797   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.501346   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.501704   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501941   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.502122   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502289   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502432   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.502647   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.502863   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.502893   63014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:44:24.614045   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:44:24.614077   63014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:44:24.614100   63014 buildroot.go:174] setting up certificates
	I0229 18:44:24.614109   63014 provision.go:83] configureAuth start
	I0229 18:44:24.614117   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.614363   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.616878   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617257   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.617279   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617476   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.619950   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620245   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.620267   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620394   63014 provision.go:138] copyHostCerts
	I0229 18:44:24.620452   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:44:24.620464   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:44:24.620556   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:44:24.620684   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:44:24.620696   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:44:24.620741   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:44:24.620804   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:44:24.620813   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:44:24.620834   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:44:24.620882   63014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555986 san=[192.168.61.240 192.168.61.240 localhost 127.0.0.1 minikube newest-cni-555986]
	I0229 18:44:24.827181   63014 provision.go:172] copyRemoteCerts
	I0229 18:44:24.827251   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:44:24.827279   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.829858   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.830156   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830301   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.830508   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.830669   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.830821   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:24.912148   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:44:24.940337   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:44:24.964760   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:44:24.989172   63014 provision.go:86] duration metric: configureAuth took 375.052041ms
	I0229 18:44:24.989199   63014 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:44:24.989409   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:24.989435   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.989688   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.992106   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992563   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.992611   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.992974   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993154   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993340   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.993520   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.993692   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.993704   63014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:44:25.097791   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:44:25.097813   63014 buildroot.go:70] root file system type: tmpfs
	I0229 18:44:25.097929   63014 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:44:25.097947   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.100783   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.101236   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101447   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.101676   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.101861   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.102013   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.102184   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.102339   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.102416   63014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:44:25.226726   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:44:25.226753   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.229479   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.229789   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.229817   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.230008   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.230223   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230411   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230581   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.230775   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.230956   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.230980   63014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:44:22.520192   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:22.534228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:22.552116   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.552147   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:22.552192   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:22.574830   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.574867   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:22.574933   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:22.594718   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.594752   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:22.594810   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:22.615676   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.615711   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:22.615772   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:22.635359   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.635393   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:22.635455   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:22.655352   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.655381   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:22.655442   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:22.673481   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.673508   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:22.673562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:22.691542   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.691563   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:22.691573   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:22.691583   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:22.741934   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:22.741964   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:22.760644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:22.760681   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:22.838701   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:22.838724   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:22.838737   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:22.879863   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:22.879892   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.442546   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:25.456540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:25.476142   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.476168   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:25.476213   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:25.494185   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.494216   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:25.494275   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:25.517155   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.517187   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:25.517251   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:25.535776   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.535805   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:25.535864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:25.554255   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.554283   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:25.554326   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:25.571356   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.571383   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:25.571438   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:25.589129   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.589158   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:25.589218   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:25.607610   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.607654   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:25.607667   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:25.607683   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.669924   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:25.669954   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:21.145364   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:23.146563   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:25.146956   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:26.132356   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:44:26.132385   63014 machine.go:91] provisioned docker machine in 1.75205798s
	I0229 18:44:26.132402   63014 start.go:300] post-start starting for "newest-cni-555986" (driver="kvm2")
	I0229 18:44:26.132418   63014 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:44:26.132438   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.132741   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:44:26.132770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.135459   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135816   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.135839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135993   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.136198   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.136380   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.136509   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.220695   63014 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:44:26.225534   63014 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:44:26.225565   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:44:26.225648   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:44:26.225753   63014 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:44:26.225877   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:44:26.236218   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:26.260637   63014 start.go:303] post-start completed in 128.220021ms
	I0229 18:44:26.260663   63014 fix.go:56] fixHost completed within 20.546314149s
	I0229 18:44:26.260683   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.263403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263761   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.263791   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.264190   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264376   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264513   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.264704   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:26.264952   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:26.264972   63014 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:44:26.364534   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232266.337605764
	
	I0229 18:44:26.364556   63014 fix.go:206] guest clock: 1709232266.337605764
	I0229 18:44:26.364566   63014 fix.go:219] Guest: 2024-02-29 18:44:26.337605764 +0000 UTC Remote: 2024-02-29 18:44:26.260667088 +0000 UTC m=+20.709360868 (delta=76.938676ms)
	I0229 18:44:26.364589   63014 fix.go:190] guest clock delta is within tolerance: 76.938676ms
	I0229 18:44:26.364595   63014 start.go:83] releasing machines lock for "newest-cni-555986", held for 20.650256948s
	I0229 18:44:26.364617   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.364856   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:26.367497   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.367884   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.367914   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.368067   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368783   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368848   63014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:44:26.368893   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.369018   63014 ssh_runner.go:195] Run: cat /version.json
	I0229 18:44:26.369042   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.371814   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372058   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372159   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372329   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372406   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372429   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372561   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372642   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.372759   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372837   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.372910   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.373031   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.471860   63014 ssh_runner.go:195] Run: systemctl --version
	I0229 18:44:26.478160   63014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:44:26.483953   63014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:44:26.484004   63014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:44:26.501209   63014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:44:26.501232   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.501345   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.520439   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:44:26.532631   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:44:26.544776   63014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:44:26.544846   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:44:26.556908   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.571173   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:44:26.584793   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.599578   63014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:44:26.613065   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:44:26.625963   63014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:44:26.636208   63014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:44:26.647304   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:26.773666   63014 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:44:26.805201   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.805282   63014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:44:26.828840   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.845685   63014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:44:26.864281   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.878719   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.891594   63014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:44:26.918028   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.932594   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.953389   63014 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:44:26.957403   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:44:26.966554   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:44:26.983908   63014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:44:27.099127   63014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:44:27.229263   63014 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:44:27.229402   63014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:44:27.248050   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:27.370928   63014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:44:28.846692   63014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.475728413s)
	I0229 18:44:28.846793   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:44:28.862710   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:28.876125   63014 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:44:28.990050   63014 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:44:29.111415   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.241702   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:44:29.259418   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:29.274090   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.405739   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:44:29.483337   63014 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:44:29.483415   63014 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:44:29.489731   63014 start.go:543] Will wait 60s for crictl version
	I0229 18:44:29.489807   63014 ssh_runner.go:195] Run: which crictl
	I0229 18:44:29.493965   63014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:44:29.551137   63014 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:44:29.551214   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.585366   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.616533   63014 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 18:44:29.616588   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:29.619293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619645   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:29.619671   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619927   63014 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:44:29.624040   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.638664   63014 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 18:44:29.640035   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:29.640131   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.661958   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.662001   63014 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:44:29.662060   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.681050   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.681077   63014 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:44:29.681146   63014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:44:29.705900   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:29.705930   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:29.705950   63014 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 18:44:29.705973   63014 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.240 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-555986 NodeName:newest-cni-555986 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.61.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:44:29.706192   63014 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-555986"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:44:29.706334   63014 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-555986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:44:29.706410   63014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:44:29.717785   63014 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:44:29.717857   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:44:29.728573   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0229 18:44:29.746192   63014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:44:29.763094   63014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0229 18:44:29.780941   63014 ssh_runner.go:195] Run: grep 192.168.61.240	control-plane.minikube.internal$ /etc/hosts
	I0229 18:44:29.784664   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.796533   63014 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986 for IP: 192.168.61.240
	I0229 18:44:29.796569   63014 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:29.796698   63014 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:44:29.796746   63014 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:44:29.796809   63014 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/client.key
	I0229 18:44:29.796890   63014 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key.0e2de265
	I0229 18:44:29.796948   63014 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key
	I0229 18:44:29.797064   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:44:29.797094   63014 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:44:29.797103   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:44:29.797124   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:44:29.797154   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:44:29.797188   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:44:29.797243   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:29.797875   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:44:29.822101   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:44:29.847169   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:44:29.871405   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:44:29.898154   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:44:29.931310   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:44:29.957589   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:44:29.983801   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:44:30.011017   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:44:30.037607   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:44:30.067042   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:44:30.092561   63014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:44:30.111494   63014 ssh_runner.go:195] Run: openssl version
	I0229 18:44:30.117488   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:44:30.128877   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133493   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133540   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.139567   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:44:30.150842   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:44:30.161780   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166396   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166447   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.172649   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:44:30.183406   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:44:30.194175   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198677   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198732   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.204430   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:44:30.215298   63014 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:44:30.219939   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:44:30.225927   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:44:30.231724   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:44:30.237680   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:44:30.243550   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:44:30.249342   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:44:30.255106   63014 kubeadm.go:404] StartCluster: {Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false s
ystem_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:30.255230   63014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:30.272612   63014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:44:30.283794   63014 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:44:30.283824   63014 kubeadm.go:636] restartCluster start
	I0229 18:44:30.283885   63014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:44:30.295185   63014 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:30.296063   63014 kubeconfig.go:135] verify returned: extract IP: "newest-cni-555986" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:30.296546   63014 kubeconfig.go:146] "newest-cni-555986" context is missing from /home/jenkins/minikube-integration/18259-6402/kubeconfig - will repair!
	I0229 18:44:30.297381   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:30.299196   63014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:44:30.309378   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.309439   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.322034   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:25.721765   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:25.721797   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:25.748884   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:25.748919   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:25.862593   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:25.862613   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:25.862627   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.412364   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:28.426168   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:28.444018   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.444048   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:28.444104   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:28.462393   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.462422   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:28.462481   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:28.480993   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.481021   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:28.481065   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:28.498930   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.498974   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:28.499034   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:28.517355   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.517386   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:28.517452   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:28.536493   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.536522   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:28.536629   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:28.554364   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.554392   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:28.554448   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:28.573203   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.573229   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:28.573241   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:28.573260   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:28.628788   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:28.628820   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:28.647595   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:28.647631   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:28.726195   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:28.726215   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:28.726228   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.783540   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:28.783575   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:27.147370   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:29.653339   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:30.810019   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.810100   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.822777   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.310338   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.310472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.324112   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.809551   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.809687   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.822657   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.310271   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.310348   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.324846   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.810460   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.810534   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.824072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.309541   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.309620   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.323749   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.810371   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.810472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.823564   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.309724   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.309805   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.322875   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.809427   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.809539   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.823871   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:35.310485   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.310554   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.324367   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.358413   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:31.374228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:31.392618   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.392649   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:31.392713   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:31.411406   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.411437   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:31.411497   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:31.431126   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.431157   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:31.431204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:31.451504   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.451531   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:31.451571   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:31.470318   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.470339   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:31.470388   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:31.489264   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.489289   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:31.489341   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:31.507636   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.507672   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:31.507730   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:31.526580   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.526602   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:31.526614   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:31.526634   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:31.568164   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:31.568199   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:31.627762   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:31.627786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:31.678480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:31.678514   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:31.695623   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:31.695659   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:31.793131   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.293320   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:34.307693   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:34.328775   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.328805   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:34.328863   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:34.347049   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.347075   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:34.347126   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:34.365903   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.365933   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:34.365993   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:34.383898   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.383932   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:34.383995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:34.402605   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.402632   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:34.402694   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:34.420889   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.420918   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:34.420976   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:34.439973   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.440000   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:34.440059   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:34.457452   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.457483   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:34.457496   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:34.457510   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:34.505134   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:34.505167   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:34.520181   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:34.520212   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:34.589435   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.589455   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:34.589466   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:34.634139   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:34.634168   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:32.149594   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:34.645888   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:35.809842   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.809911   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.823992   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.309548   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.309649   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.322861   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.810470   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.810541   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.824023   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.309492   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.309593   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.323072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.809581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.809688   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.822964   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.309476   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.309584   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.322909   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.810487   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.810602   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.824118   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.309581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.309683   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.323438   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.810045   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.810149   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.823071   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.309893   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:40.309956   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:40.326570   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.326600   63014 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0229 18:44:40.326612   63014 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:44:40.326684   63014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:40.350696   63014 docker.go:483] Stopping containers: [c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3]
	I0229 18:44:40.350775   63014 ssh_runner.go:195] Run: docker stop c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3
	I0229 18:44:40.379218   63014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:44:40.406202   63014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:40.418532   63014 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:40.418593   63014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430345   63014 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430371   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:40.561772   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:37.197653   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:37.211167   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:37.233259   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.233294   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:37.233349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:37.254237   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.254264   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:37.254322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:37.274320   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.274347   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:37.274401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:37.292854   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.292880   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:37.292929   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:37.310405   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.310429   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:37.310466   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:37.328374   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.328394   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:37.328434   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:37.345294   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.345321   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:37.345383   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:37.362743   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.362768   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:37.362779   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:37.362793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:37.410877   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:37.410914   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:37.425653   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:37.425689   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:37.490957   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:37.490981   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:37.490994   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:37.530316   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:37.530344   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:40.088251   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:40.102064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:40.121304   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.121338   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:40.121392   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:40.139634   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.139682   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:40.139742   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:40.156924   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.156950   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:40.156995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:40.174050   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.174076   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:40.174117   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:40.191417   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.191444   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:40.191488   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:40.209488   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.209515   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:40.209578   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:40.226753   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.226775   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:40.226828   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:40.244478   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.244505   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:40.244516   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:40.244526   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:40.299257   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:40.299293   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:40.316326   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:40.316356   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:40.407508   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:40.407531   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:40.407545   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:40.450989   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:40.451022   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:37.145550   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:39.645463   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:41.139942   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.337079   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.447658   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.519164   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:41.519271   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.020352   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.519558   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.020287   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.051672   63014 api_server.go:72] duration metric: took 1.532507495s to wait for apiserver process to appear ...
	I0229 18:44:43.051702   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:43.051723   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.052327   63014 api_server.go:269] stopped: https://192.168.61.240:8443/healthz: Get "https://192.168.61.240:8443/healthz": dial tcp 192.168.61.240:8443: connect: connection refused
	I0229 18:44:43.552797   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.024851   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.040954   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:43.067062   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.067087   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:43.067142   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:43.112898   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.112929   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:43.112987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:43.144432   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.144516   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:43.144577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:43.180141   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.180170   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:43.180217   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:43.203493   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.203521   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:43.203562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:43.227035   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.227065   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:43.227120   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:43.247867   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.247897   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:43.247959   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:43.269511   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.269538   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:43.269550   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:43.269566   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:43.287349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:43.287380   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:43.368033   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:43.368051   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:43.368062   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:43.425200   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:43.425235   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:43.492870   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:43.492906   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:41.648546   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:44.146476   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:46.415578   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.415614   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.415633   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.462403   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.462439   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.552650   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.559420   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:46.559454   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.052823   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.059079   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.059117   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.552719   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.561838   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.561869   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:48.052436   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:48.057072   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:48.064135   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:48.064164   63014 api_server.go:131] duration metric: took 5.012454851s to wait for apiserver health ...
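	The 63014 run above waits for the apiserver by repeatedly probing https://192.168.61.240:8443/healthz: first a refused connection while the process starts, then 403 while anonymous access is still forbidden, then 500 while poststarthooks finish, and finally 200 with body "ok". Below is a minimal Go sketch of that kind of wait loop; the URL, timeout, and insecure TLS transport are placeholders chosen purely for illustration and do not reproduce minikube's own client setup in api_server.go.

	// sketch only: poll a kube-apiserver /healthz endpoint until it returns 200,
	// treating connection errors, 403 and 500 alike as "not ready yet".
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// InsecureSkipVerify only because this sketch has no cluster CA at hand.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil // healthz answered 200 "ok"
				}
				// 403 (RBAC not bootstrapped) or 500 (poststarthooks failing): keep waiting.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.240:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	Treating 403 and 500 the same way as a refused connection matters here, because the log shows both statuses appearing before the endpoint settles on 200.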
	I0229 18:44:48.064173   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:48.064185   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:48.066074   63014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:44:48.067507   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:44:48.078593   63014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
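	The bridge CNI step above creates /etc/cni/net.d and copies a 457-byte 1-k8s.conflist onto the node; the log does not show the file's contents. The following Go sketch writes a generic bridge-plus-portmap conflist of roughly that shape. The field values, the 10.244.0.0/16 subnet, and writing the path directly (rather than scp'ing it into the VM) are illustrative assumptions, not the exact bytes minikube transfers.

	// sketch only: emit a generic bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist.
	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR for illustration
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		data, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Writing here requires root on the node, matching the sudo mkdir in the log above.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
			log.Fatal(err)
		}
	}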
	I0229 18:44:48.102538   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:48.114933   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:48.114965   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114972   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114979   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:48.114985   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:48.114990   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:48.114995   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:44:48.115002   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:48.115006   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:48.115011   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:44:48.115017   63014 system_pods.go:74] duration metric: took 12.45428ms to wait for pod list to return data ...
	I0229 18:44:48.115024   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:48.118425   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:48.118453   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:48.118465   63014 node_conditions.go:105] duration metric: took 3.434927ms to run NodePressure ...
	I0229 18:44:48.118487   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:48.394218   63014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:44:48.407374   63014 ops.go:34] apiserver oom_adj: -16
	I0229 18:44:48.407397   63014 kubeadm.go:640] restartCluster took 18.123565128s
	I0229 18:44:48.407408   63014 kubeadm.go:406] StartCluster complete in 18.152305653s
	I0229 18:44:48.407427   63014 settings.go:142] acquiring lock: {Name:mk85324150508323d0a817853e472a1fdcadc314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.407503   63014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:48.408551   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.408794   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:44:48.408811   63014 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:44:48.408877   63014 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-555986"
	I0229 18:44:48.408884   63014 addons.go:69] Setting dashboard=true in profile "newest-cni-555986"
	I0229 18:44:48.408904   63014 addons.go:234] Setting addon dashboard=true in "newest-cni-555986"
	I0229 18:44:48.408910   63014 addons.go:69] Setting metrics-server=true in profile "newest-cni-555986"
	I0229 18:44:48.408925   63014 addons.go:234] Setting addon metrics-server=true in "newest-cni-555986"
	W0229 18:44:48.408930   63014 addons.go:243] addon dashboard should already be in state true
	W0229 18:44:48.408936   63014 addons.go:243] addon metrics-server should already be in state true
	I0229 18:44:48.408961   63014 addons.go:69] Setting default-storageclass=true in profile "newest-cni-555986"
	I0229 18:44:48.408905   63014 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-555986"
	I0229 18:44:48.408987   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	W0229 18:44:48.408996   63014 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:44:48.408999   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409016   63014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-555986"
	I0229 18:44:48.409070   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.408985   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.409048   63014 cache.go:107] acquiring lock: {Name:mk0db597c024ca72f3d806b204928d2d6d5c0ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:48.409212   63014 cache.go:115] /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 18:44:48.409221   63014 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 177.702µs
	I0229 18:44:48.409233   63014 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 18:44:48.409247   63014 cache.go:87] Successfully saved all images to host disk.
	I0229 18:44:48.409439   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409451   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409463   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409524   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409532   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409545   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409558   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409652   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409679   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409964   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.410023   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.414076   63014 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-555986" context rescaled to 1 replicas
	I0229 18:44:48.414110   63014 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:44:48.416128   63014 out.go:177] * Verifying Kubernetes components...
	I0229 18:44:48.417753   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:48.430067   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0229 18:44:48.430297   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0229 18:44:48.430412   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0229 18:44:48.430460   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0229 18:44:48.430866   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.430972   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431065   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431545   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431550   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431548   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431582   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431597   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431929   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.431972   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432206   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432253   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432290   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0229 18:44:48.432364   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.432382   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.432574   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.432606   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.432958   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432959   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.433540   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.433565   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.433650   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.434192   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.434219   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.434624   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.435113   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435154   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.435691   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435710   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.436522   63014 addons.go:234] Setting addon default-storageclass=true in "newest-cni-555986"
	W0229 18:44:48.436539   63014 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:44:48.436571   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.436949   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.436982   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.453519   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0229 18:44:48.453637   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0229 18:44:48.454123   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454220   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454725   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454745   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.454863   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454877   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.455157   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455208   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455283   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0229 18:44:48.455442   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455605   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455688   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.456149   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.456163   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.456470   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.456608   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.456786   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:48.456811   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.458869   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.461038   63014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 18:44:48.459183   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.460680   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.461477   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.462531   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 18:44:48.462548   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 18:44:48.462566   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.462647   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.462653   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.462678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.464438   63014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:44:48.462979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.465829   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.465902   63014 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.465920   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:44:48.465925   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.465937   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.466007   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I0229 18:44:48.466429   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.466991   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.467012   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.467180   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.467205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.467371   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.467432   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.467587   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.467594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.467770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.467913   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.469491   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.472294   63014 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 18:44:48.470549   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.470960   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.475176   63014 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 18:44:48.473898   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.474017   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.476581   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 18:44:48.476603   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 18:44:48.475264   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.475413   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.476620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.476878   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.477547   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0229 18:44:48.477887   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.478368   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.478381   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.478677   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.479096   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.479124   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.480199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480659   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.480684   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480955   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.481090   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.481242   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.481405   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.494480   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0229 18:44:48.494928   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.495370   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.495394   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.495667   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.495799   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.497441   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.497645   63014 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.497657   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:44:48.497667   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.500838   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.501380   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501593   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.501804   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.501963   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.502090   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.742737   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 18:44:48.742770   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 18:44:48.753599   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 18:44:48.753628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 18:44:48.765187   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.781474   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.837624   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 18:44:48.837655   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 18:44:48.847412   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 18:44:48.847440   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 18:44:48.878964   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:48.879048   63014 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:44:48.879052   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:48.879064   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:48.879082   63014 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:44:48.879095   63014 cache_images.go:262] succeeded pushing to: newest-cni-555986
	I0229 18:44:48.879101   63014 cache_images.go:263] failed pushing to: 
	I0229 18:44:48.879122   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879135   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879510   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:48.879520   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879539   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.879565   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879876   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879907   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.945106   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 18:44:48.945130   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 18:44:48.946603   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:48.946628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 18:44:49.013179   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 18:44:49.013199   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 18:44:49.036118   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:49.122858   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 18:44:49.122892   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 18:44:49.215329   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 18:44:49.215361   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 18:44:49.228881   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.228905   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229150   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229175   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.229199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.229245   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.229262   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229590   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229607   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.236908   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.236931   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.237194   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.237213   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.237232   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.313570   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 18:44:49.313605   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 18:44:49.375520   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 18:44:49.375549   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 18:44:49.445233   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:49.445262   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 18:44:49.520309   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:50.293009   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.511487012s)
	I0229 18:44:50.293056   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293069   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293082   63014 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.413998817s)
	I0229 18:44:50.293122   63014 api_server.go:72] duration metric: took 1.878985811s to wait for apiserver process to appear ...
	I0229 18:44:50.293139   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:50.293159   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:50.293390   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293444   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293454   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.293472   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293745   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293858   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293880   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.300808   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:50.303536   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:50.303558   63014 api_server.go:131] duration metric: took 10.411694ms to wait for apiserver health ...
	I0229 18:44:50.303569   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:50.310252   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:50.310280   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310290   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310298   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:50.310307   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:50.310316   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:50.310335   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running
	I0229 18:44:50.310343   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:50.310356   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:50.310365   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running
	I0229 18:44:50.310376   63014 system_pods.go:74] duration metric: took 6.800137ms to wait for pod list to return data ...
	I0229 18:44:50.310386   63014 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:44:50.313209   63014 default_sa.go:45] found service account: "default"
	I0229 18:44:50.313231   63014 default_sa.go:55] duration metric: took 2.835138ms for default service account to be created ...
	I0229 18:44:50.313244   63014 kubeadm.go:581] duration metric: took 1.899107276s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 18:44:50.313262   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:50.315732   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:50.315752   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:50.315765   63014 node_conditions.go:105] duration metric: took 2.49465ms to run NodePressure ...
	I0229 18:44:50.315778   63014 start.go:228] waiting for startup goroutines ...
	I0229 18:44:50.412181   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.376016712s)
	I0229 18:44:50.412237   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412253   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412517   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412562   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412602   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412620   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412632   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412844   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412879   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412886   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412909   63014 addons.go:470] Verifying addon metrics-server=true in "newest-cni-555986"
	I0229 18:44:50.642086   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.121716457s)
	I0229 18:44:50.642146   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642162   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642465   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642487   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642498   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642506   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642526   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.642764   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642774   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642777   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.644564   63014 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-555986 addons enable metrics-server
	
	I0229 18:44:50.646195   63014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 18:44:46.045085   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:46.060842   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:46.080115   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.080151   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:46.080204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:46.098951   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.098977   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:46.099045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:46.117884   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.117914   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:46.117962   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:46.135090   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.135122   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:46.135183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:46.154068   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.154094   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:46.154150   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:46.175259   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.175291   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:46.175348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:46.199979   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.200010   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:46.200073   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:46.219082   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.219109   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:46.219118   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:46.219129   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:46.285752   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:46.285802   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:46.362896   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:46.362923   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:46.424465   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:46.424496   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:46.440644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:46.440676   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:46.516207   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.017356   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:49.036558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:49.062037   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.062073   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:49.062122   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:49.089359   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.089383   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:49.089436   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:49.112366   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.112397   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:49.112447   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:49.135268   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.135300   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:49.135357   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:49.158768   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.158795   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:49.158862   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:49.182032   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.182056   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:49.182100   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:49.202844   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.202880   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:49.202937   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:49.223496   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.223522   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:49.223533   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:49.223548   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:49.283784   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:49.283833   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:49.299408   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:49.299450   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:49.381751   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.381777   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:49.381793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:49.425633   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:49.425671   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:50.647633   63014 addons.go:505] enable addons completed in 2.238822444s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0229 18:44:50.647682   63014 start.go:233] waiting for cluster config update ...
	I0229 18:44:50.647711   63014 start.go:242] writing updated cluster config ...
	I0229 18:44:50.648039   63014 ssh_runner.go:195] Run: rm -f paused
	I0229 18:44:50.699121   63014 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:44:50.700743   63014 out.go:177] * Done! kubectl is now configured to use "newest-cni-555986" cluster and "default" namespace by default
	I0229 18:44:46.147159   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:48.147947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:50.646890   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:51.992923   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:52.009101   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:52.030751   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.030778   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:52.030834   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:52.051175   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.051205   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:52.051258   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:52.070270   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.070292   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:52.070346   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:52.089729   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.089755   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:52.089807   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:52.109158   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.109181   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:52.109235   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:52.127440   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.127464   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:52.127509   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:52.146458   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.146485   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:52.146542   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:52.164899   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.164925   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:52.164934   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:52.164944   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:52.223827   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:52.223870   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:52.245832   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:52.245869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:52.350010   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:52.350037   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:52.350051   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:52.400763   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:52.400792   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:54.965688   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:54.984737   61028 kubeadm.go:640] restartCluster took 4m13.179905747s
	W0229 18:44:54.984813   61028 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 18:44:54.984842   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:44:55.440354   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:55.456286   61028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:55.467480   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:55.478159   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:55.478205   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:44:55.539798   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:44:55.539888   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:44:53.148909   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.149846   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.752087   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:44:55.752264   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:44:55.752401   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:44:55.906569   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:44:55.907774   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:44:55.917392   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:44:56.046677   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:44:56.048655   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:44:56.048771   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:44:56.048874   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:44:56.048992   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:44:56.052691   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:44:56.052805   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:44:56.052890   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:44:56.052984   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:44:56.053096   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:44:56.053215   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:44:56.053320   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:44:56.053379   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:44:56.053475   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:44:56.176574   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:44:56.329888   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:44:56.623253   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:44:56.722273   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:44:56.723020   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:44:56.724880   61028 out.go:204]   - Booting up control plane ...
	I0229 18:44:56.725005   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:44:56.730320   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:44:56.731630   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:44:56.732332   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:44:56.734500   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:44:57.646118   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:59.648032   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:02.144840   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:04.145112   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:06.146649   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:08.647051   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:11.148318   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:13.646816   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:16.145165   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:18.146437   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:20.147686   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:22.645925   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:25.146444   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:27.645765   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:29.646621   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:31.647146   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:34.145657   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:36.735482   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:45:36.736181   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:36.736433   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:36.145891   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:38.149811   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:40.646401   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:41.737158   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:41.737332   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:43.145942   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:45.146786   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:47.648714   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:50.145240   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:51.737722   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:51.737923   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:52.145341   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:54.145559   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:56.646087   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:58.646249   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:00.646466   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:02.647293   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:05.146452   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:07.646128   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:10.147008   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:11.738541   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:11.738773   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:12.646406   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:14.647319   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:17.146097   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:19.146615   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:21.147384   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:23.646155   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:25.647369   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:28.146558   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:30.645408   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:32.649260   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:34.650076   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:37.146414   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:39.146947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:41.645903   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:43.646016   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:45.646056   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:47.646659   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:49.647440   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:51.739942   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:51.740223   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:51.740253   61028 kubeadm.go:322] 
	I0229 18:46:51.740302   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:51.740342   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:51.740349   61028 kubeadm.go:322] 
	I0229 18:46:51.740377   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:51.740404   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:51.740528   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:51.740544   61028 kubeadm.go:322] 
	I0229 18:46:51.740646   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:51.740675   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:51.740726   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:51.740736   61028 kubeadm.go:322] 
	I0229 18:46:51.740844   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:51.740950   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:51.741029   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:51.741103   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:51.741204   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:51.741261   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:51.742036   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:46:51.742190   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:46:51.742337   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:46:51.742464   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:51.742640   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:51.742725   61028 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:46:51.742786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:46:52.197144   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:52.214163   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:46:52.226374   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:46:52.226416   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:46:52.285152   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:52.285314   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:52.500283   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:52.500430   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:52.500558   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:52.672731   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:52.672847   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:52.681682   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:52.809851   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:52.811832   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:52.811937   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:52.812027   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:52.812099   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:52.812153   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:52.812252   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:52.812333   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:52.812427   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:52.812513   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:52.812652   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:52.813069   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:52.813244   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:52.813324   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:52.931955   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:53.294257   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:53.376114   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:53.620085   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:53.620974   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:53.622696   61028 out.go:204]   - Booting up control plane ...
	I0229 18:46:53.622772   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:53.627326   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:53.628386   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:53.629224   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:53.632638   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:46:52.145625   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:54.146306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:56.146385   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:58.649533   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:01.145784   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:03.648061   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:06.145955   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:07.645834   60121 pod_ready.go:81] duration metric: took 4m0.007156334s waiting for pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace to be "Ready" ...
	E0229 18:47:07.645859   60121 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 18:47:07.645869   60121 pod_ready.go:38] duration metric: took 4m1.184866089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:47:07.645887   60121 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:47:07.645945   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:07.671520   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:07.671613   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:07.691503   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:07.691571   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:07.710557   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:07.710627   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:07.730780   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:07.730868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:07.749894   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:07.749981   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:07.772545   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:07.772620   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:07.791523   60121 logs.go:276] 0 containers: []
	W0229 18:47:07.791554   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:07.791604   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:07.812744   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:07.812833   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:07.831469   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:07.831505   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:07.831515   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:07.904596   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:07.904778   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:07.929197   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:07.929234   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:07.965399   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:07.965430   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:07.997552   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:07.997582   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:08.043918   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:08.043954   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:08.068540   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:08.068569   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:08.093297   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:08.093326   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:08.160393   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:08.160432   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:08.234099   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:08.234128   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:08.249381   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:08.249406   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:08.411423   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:08.411457   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:08.440486   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:08.440516   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:08.474207   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:08.474320   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:08.498143   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498169   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:08.498225   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:08.498241   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:08.498252   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:08.498266   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498277   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:18.499396   60121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:47:18.517660   60121 api_server.go:72] duration metric: took 4m15.022647547s to wait for apiserver process to appear ...
	I0229 18:47:18.517688   60121 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:47:18.517766   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:18.542263   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:18.542333   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:18.565885   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:18.565964   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:18.585135   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:18.585213   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:18.605789   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:18.605850   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:18.624993   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:18.625062   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:18.648049   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:18.648118   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:18.668689   60121 logs.go:276] 0 containers: []
	W0229 18:47:18.668713   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:18.668759   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:18.691741   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:18.691813   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:18.713776   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:18.713810   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:18.713823   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:18.781369   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:18.781564   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:18.808924   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:18.808965   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:18.824723   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:18.824756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:18.854531   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:18.854576   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:18.897618   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:18.897650   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:18.936914   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:18.936946   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:19.011250   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:19.011280   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:19.075817   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:19.075850   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:19.200261   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:19.200299   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:19.236988   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:19.237015   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:19.269721   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:19.269750   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:19.296918   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:19.296944   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:19.319721   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:19.319753   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:19.342330   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342355   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:19.342410   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:19.342423   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:19.342429   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:19.342437   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342447   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:29.343918   60121 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8444/healthz ...
	I0229 18:47:29.350861   60121 api_server.go:279] https://192.168.39.148:8444/healthz returned 200:
	ok
	I0229 18:47:29.352541   60121 api_server.go:141] control plane version: v1.28.4
	I0229 18:47:29.352560   60121 api_server.go:131] duration metric: took 10.834865386s to wait for apiserver health ...
	I0229 18:47:29.352569   60121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:47:29.352633   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:29.373466   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:29.373535   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:29.394287   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:29.394375   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:29.415331   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:29.415410   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:29.436682   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:29.436764   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:29.456935   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:29.457003   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:29.475799   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:29.475868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:29.496876   60121 logs.go:276] 0 containers: []
	W0229 18:47:29.496904   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:29.496963   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:29.516724   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:29.516794   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:29.535652   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:29.535683   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:29.535693   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:29.559535   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:29.559563   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:29.587928   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:29.587952   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:29.610085   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:29.610111   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:29.673987   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:29.674033   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:29.751324   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:29.751355   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:29.876322   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:29.876347   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:29.900325   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:29.900349   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:29.936137   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:29.936167   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:29.969468   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:29.969499   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:30.017539   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:30.017587   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:30.093486   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.093682   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.124169   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:30.124211   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:30.140725   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:30.140756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:30.174590   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174628   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:30.174694   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:30.174708   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.174715   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.174726   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174731   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:33.634399   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:47:33.635096   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:33.635349   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:40.185191   60121 system_pods.go:59] 8 kube-system pods found
	I0229 18:47:40.185222   60121 system_pods.go:61] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.185227   60121 system_pods.go:61] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.185232   60121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.185235   60121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.185238   60121 system_pods.go:61] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.185241   60121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.185247   60121 system_pods.go:61] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.185251   60121 system_pods.go:61] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.185257   60121 system_pods.go:74] duration metric: took 10.832681757s to wait for pod list to return data ...
	I0229 18:47:40.185264   60121 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:47:40.188055   60121 default_sa.go:45] found service account: "default"
	I0229 18:47:40.188075   60121 default_sa.go:55] duration metric: took 2.8056ms for default service account to be created ...
	I0229 18:47:40.188083   60121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:47:40.199288   60121 system_pods.go:86] 8 kube-system pods found
	I0229 18:47:40.199317   60121 system_pods.go:89] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.199325   60121 system_pods.go:89] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.199330   60121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.199335   60121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.199340   60121 system_pods.go:89] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.199347   60121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.199359   60121 system_pods.go:89] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.199369   60121 system_pods.go:89] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.199383   60121 system_pods.go:126] duration metric: took 11.294328ms to wait for k8s-apps to be running ...
	I0229 18:47:40.199394   60121 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:47:40.199452   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:47:40.216650   60121 system_svc.go:56] duration metric: took 17.247343ms WaitForService to wait for kubelet.
	I0229 18:47:40.216679   60121 kubeadm.go:581] duration metric: took 4m36.72166867s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:47:40.216705   60121 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:47:40.220111   60121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:47:40.220142   60121 node_conditions.go:123] node cpu capacity is 2
	I0229 18:47:40.220157   60121 node_conditions.go:105] duration metric: took 3.446433ms to run NodePressure ...
	I0229 18:47:40.220172   60121 start.go:228] waiting for startup goroutines ...
	I0229 18:47:40.220180   60121 start.go:233] waiting for cluster config update ...
	I0229 18:47:40.220193   60121 start.go:242] writing updated cluster config ...
	I0229 18:47:40.220531   60121 ssh_runner.go:195] Run: rm -f paused
	I0229 18:47:40.268347   60121 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:47:40.270302   60121 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-270866" cluster and "default" namespace by default
	I0229 18:47:38.635813   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:38.636020   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:48.636649   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:48.636873   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:08.637971   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:08.638214   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639456   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:48.639757   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639779   61028 kubeadm.go:322] 
	I0229 18:48:48.639840   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:48:48.639924   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:48:48.639950   61028 kubeadm.go:322] 
	I0229 18:48:48.640004   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:48:48.640046   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:48:48.640168   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:48:48.640178   61028 kubeadm.go:322] 
	I0229 18:48:48.640273   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:48:48.640313   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:48:48.640347   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:48:48.640353   61028 kubeadm.go:322] 
	I0229 18:48:48.640439   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:48:48.640559   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:48:48.640671   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:48:48.640752   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:48:48.640864   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:48:48.640919   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:48:48.641703   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:48:48.641878   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:48:48.641968   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:48:48.642071   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:48:48.642249   61028 kubeadm.go:406] StartCluster complete in 8m6.867140018s
	I0229 18:48:48.642265   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:48:48.642322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:48:48.674320   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.674348   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:48:48.674398   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:48:48.695124   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.695148   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:48:48.695190   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:48:48.712218   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.712245   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:48:48.712299   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:48:48.730912   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.730939   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:48:48.730982   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:48:48.748542   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.748576   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:48:48.748622   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:48:48.765544   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.765570   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:48:48.765623   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:48:48.791193   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.791238   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:48:48.791296   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:48:48.813084   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.813119   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:48:48.813132   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:48:48.813144   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:48:48.834348   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:48:48.834373   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:48:48.911451   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:48:48.911473   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:48:48.911485   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:48:48.954088   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:48:48.954119   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:48:49.019061   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:48:49.019092   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:48:49.067347   61028 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:48:49.067396   61028 out.go:239] * 
	W0229 18:48:49.067456   61028 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.067477   61028 out.go:239] * 
	W0229 18:48:49.068210   61028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:48:49.072114   61028 out.go:177] 
	W0229 18:48:49.073581   61028 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.073626   61028 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:48:49.073649   61028 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:48:49.075293   61028 out.go:177] 
	
	
	==> Docker <==
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050425153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050467385Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050514780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050552148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050590447Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050660627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050699694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050735468Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050781822Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050897158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051019076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051064571Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051441623Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051565243Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051659095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051747686Z" level=info msg="containerd successfully booted in 0.034113s"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.252862682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.297343935Z" level=info msg="Loading containers: start."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.417489065Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.467932343Z" level=info msg="Loading containers: done."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482234448Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482355814Z" level=info msg="Daemon has completed initialization"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.517930017Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.518369987Z" level=info msg="API listen on [::]:2376"
	Feb 29 18:40:40 old-k8s-version-467811 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2024-02-29T18:57:51Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056516] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.804914] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.680946] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.687335] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.061500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060694] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +1.140707] systemd-fstab-generator[780]: Ignoring "noauto" option for root device
	[  +0.360984] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.131688] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.149280] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +5.508694] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.066369] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.235011] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.074300] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9503]: Ignoring "noauto" option for root device
	[  +0.067712] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 18:46] systemd-fstab-generator[11264]: Ignoring "noauto" option for root device
	[  +0.072343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:57:51 up 17 min,  0 users,  load average: 0.00, 0.05, 0.10
	Linux old-k8s-version-467811 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: I0229 18:57:50.321762   20541 server.go:410] Version: v1.16.0
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: I0229 18:57:50.322022   20541 plugins.go:100] No cloud provider specified.
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: I0229 18:57:50.322034   20541 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: I0229 18:57:50.324184   20541 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: W0229 18:57:50.325106   20541 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: W0229 18:57:50.325236   20541 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 18:57:50 old-k8s-version-467811 kubelet[20541]: F0229 18:57:50.325294   20541 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 877.
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:57:50 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: I0229 18:57:51.082537   20562 server.go:410] Version: v1.16.0
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: I0229 18:57:51.082750   20562 plugins.go:100] No cloud provider specified.
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: I0229 18:57:51.082761   20562 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: I0229 18:57:51.088170   20562 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: W0229 18:57:51.090897   20562 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: W0229 18:57:51.093805   20562 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 18:57:51 old-k8s-version-467811 kubelet[20562]: F0229 18:57:51.094042   20562 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 18:57:51 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 18:57:51 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 18:57:51 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 878.
	Feb 29 18:57:51 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 18:57:51 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (229.483683ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.50s)
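
Editor's note (not part of the harness output): the kubelet log above shows the node agent crash-looping on "failed to run Kubelet: mountpoint for cpu not found", i.e. kubelet v1.16 cannot find a cgroup v1 mount exposing the cpu controller, which would also explain the "Stopped" apiserver status and the connection-refused polling in the next test. The Go sketch below is a hedged, illustrative approximation of that mount-table lookup; the file name and output wording are assumptions, and it is meant to run inside the guest (for example via minikube ssh), not on the Jenkins host.

// cgroupcheck.go: minimal diagnostic sketch (hypothetical, not part of the
// minikube test harness). It approximates the check behind the kubelet error
// "mountpoint for cpu not found": scan the mount table for a cgroup v1 mount
// whose options include the "cpu" controller.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts") // run inside the guest, e.g. via minikube ssh
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read mount table:", err)
		os.Exit(1)
	}
	defer f.Close()

	foundV1CPU := false
	foundV2 := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 {
			continue
		}
		mountpoint, fstype, opts := fields[1], fields[2], strings.Split(fields[3], ",")
		switch fstype {
		case "cgroup": // a cgroup v1 hierarchy; kubelet v1.16 needs one carrying the cpu controller
			for _, o := range opts {
				if o == "cpu" {
					foundV1CPU = true
					fmt.Println("cgroup v1 cpu controller mounted at", mountpoint)
				}
			}
		case "cgroup2": // unified (v2) hierarchy only; not usable by kubelet v1.16
			foundV2 = true
		}
	}
	if !foundV1CPU {
		fmt.Println("no cgroup v1 cpu mount found; cgroup2 present:", foundV2)
	}
}

On a guest image that only mounts the unified cgroup2 hierarchy, this sketch reports no v1 cpu mount, which matches the failure mode logged above and the restart counter climbing past 870 as systemd keeps retrying the unit.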

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:58:01.104520   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:58:08.805578   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:58:24.374495   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:58:27.022564   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:58:32.449434   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 18:59:28.710829   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:00:09.380028   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:00:17.206456   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:00:18.987240   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:00:23.103500   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:00:23.977182   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:01:00.470065   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:01:18.620074   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:01:36.998617   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:01:57.968165   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/default-k8s-diff-port-270866/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:02:06.913277   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:03:00.044675   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/no-preload-580872/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:03:01.105239   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:03:08.804725   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:03:24.374298   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
E0229 19:03:32.449813   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.30:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.30:8443: connect: connection refused
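The warning lines above are the test helper repeatedly listing pods by the k8s-app=kubernetes-dashboard label against an apiserver that is refusing connections, until its 9-minute deadline runs out. A minimal sketch of that style of poll, assuming a client-go clientset built from the profile's kubeconfig; the helper name, 5-second interval, and kubeconfig path below are illustrative rather than minikube's actual test code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel lists pods matching selector in ns until at least one exists
// or the context deadline expires. Failed List calls are logged and retried,
// which is what produces the repeated WARNING lines above.
func waitForPodsByLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else if len(pods.Items) > 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q failed to start: %w", selector, ctx.Err())
		case <-time.After(5 * time.Second):
		}
	}
}

func main() {
	// Hypothetical kubeconfig path; the real test derives it from the profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPodsByLabel(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println(err)
	}
}

Each failed List corresponds to one WARNING line; once the 9m0s context expires, the "context deadline exceeded" failure recorded next is reported.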
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (243.914008ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-467811" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-467811 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-467811 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.327µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-467811 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
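The final assertion above looks for the substitute image name (registry.k8s.io/echoserver:1.4, passed via --images when the dashboard addon was enabled) in the deployment it could not describe. A hedged client-go sketch of that image check; the function name and kubeconfig path are illustrative:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deploymentUsesImage reports whether any container in the deployment's pod
// template references an image containing substr.
func deploymentUsesImage(ctx context.Context, cs *kubernetes.Clientset, ns, name, substr string) (bool, error) {
	deploy, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range deploy.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, substr) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, err := deploymentUsesImage(context.Background(), cs,
		"kubernetes-dashboard", "dashboard-metrics-scraper", "registry.k8s.io/echoserver:1.4")
	fmt.Println(ok, err)
}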
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (226.470407ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
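As with the {{.APIServer}} probe earlier, the post-mortem runs minikube status, keeps the printed state, and treats the non-zero exit as informational ("may be ok") rather than aborting. A small sketch of that pattern with os/exec, using the arguments verbatim from the command above; the helper name is illustrative:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

// runStatus runs a status-style command and returns its stdout and exit code,
// converting a non-zero exit into data instead of a fatal error.
func runStatus(bin string, args ...string) (string, int, error) {
	cmd := exec.Command(bin, args...)
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
		err = nil // e.g. exit status 2 when a component is stopped
	}
	return out.String(), code, err
}

func main() {
	state, code, err := runStatus("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-467811", "-n", "old-k8s-version-467811")
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%q exit=%d\n", state, code)
}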
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-467811 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-154269 image list                          | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p embed-certs-154269                                  | embed-certs-154269           | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | no-preload-580872 image list                           | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| delete  | -p no-preload-580872                                   | no-preload-580872            | jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	| addons  | enable metrics-server -p newest-cni-555986             | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:43 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-555986                  | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-555986 --memory=2200 --alsologtostderr   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.29.0-rc.2       |                              |         |         |                     |                     |
	| image   | newest-cni-555986 image list                           | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| delete  | -p newest-cni-555986                                   | newest-cni-555986            | jenkins | v1.32.0 | 29 Feb 24 18:44 UTC | 29 Feb 24 18:44 UTC |
	| image   | default-k8s-diff-port-270866                           | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-270866 | jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	|         | default-k8s-diff-port-270866                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
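	// The audit rows above record the exact CLI sequence driven against newest-cni-555986
	// (enable metrics-server with substitute images, stop, enable the dashboard addon,
	// restart). A hedged Go replay of that sequence with os/exec; the arguments are taken
	// verbatim from the table, while the run helper and error handling are illustrative.
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func run(args ...string) error {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
	
	func main() {
		steps := [][]string{
			{"addons", "enable", "metrics-server", "-p", "newest-cni-555986",
				"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
				"--registries=MetricsServer=fake.domain"},
			{"stop", "-p", "newest-cni-555986", "--alsologtostderr", "-v=3"},
			{"addons", "enable", "dashboard", "-p", "newest-cni-555986",
				"--images=MetricsScraper=registry.k8s.io/echoserver:1.4"},
			{"start", "-p", "newest-cni-555986", "--memory=2200", "--alsologtostderr",
				"--wait=apiserver,system_pods,default_sa", "--feature-gates", "ServerSideApply=true",
				"--network-plugin=cni", "--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
				"--driver=kvm2", "--kubernetes-version=v1.29.0-rc.2"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				panic(err)
			}
		}
	}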
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:44:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:44:05.607270   63014 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:44:05.607394   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607403   63014 out.go:304] Setting ErrFile to fd 2...
	I0229 18:44:05.607407   63014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:44:05.607676   63014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:44:05.608237   63014 out.go:298] Setting JSON to false
	I0229 18:44:05.609156   63014 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5196,"bootTime":1709227050,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 18:44:05.609218   63014 start.go:139] virtualization: kvm guest
	I0229 18:44:05.611560   63014 out.go:177] * [newest-cni-555986] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 18:44:05.613001   63014 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:44:05.614331   63014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:44:05.612955   63014 notify.go:220] Checking for updates...
	I0229 18:44:05.617084   63014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:05.618405   63014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 18:44:05.619690   63014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 18:44:05.620981   63014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:44:01.997181   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:02.011206   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:02.030099   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.030125   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:02.030173   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:02.048060   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.048086   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:02.048144   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:02.066190   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.066220   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:02.066284   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:02.085484   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.085509   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:02.085568   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:02.109533   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.109559   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:02.109615   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:02.131800   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.131822   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:02.131864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:02.151122   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.151154   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:02.151208   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:02.171811   61028 logs.go:276] 0 containers: []
	W0229 18:44:02.171846   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:02.171859   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:02.171873   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:02.216251   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:02.216284   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:02.276667   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:02.276698   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:02.328533   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:02.328564   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:02.344290   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:02.344329   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:02.414487   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:04.915506   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:04.930595   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:04.949852   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.949885   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:04.949943   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:04.968164   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.968193   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:04.968252   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:04.987171   61028 logs.go:276] 0 containers: []
	W0229 18:44:04.987196   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:04.987241   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:05.004487   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.004517   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:05.004575   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:05.022570   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.022604   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:05.022659   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:05.040454   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.040481   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:05.040540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:05.061471   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.061502   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:05.061558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:05.079346   61028 logs.go:276] 0 containers: []
	W0229 18:44:05.079377   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:05.079389   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:05.079404   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:05.093664   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:05.093691   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:05.164031   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:05.164048   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:05.164058   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:05.207561   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:05.207596   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:05.263450   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:05.263484   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
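	// The probes above check for each control-plane container by docker name filter
	// (e.g. docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}) and fall back
	// to journalctl and crictl/docker ps when nothing matches. A minimal local sketch of
	// that probe, assuming the docker CLI is on PATH; the helper name is illustrative.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerIDs returns the IDs of containers whose name matches nameFilter.
	func containerIDs(nameFilter string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+nameFilter, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
	
	func main() {
		for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns", "k8s_kube-scheduler"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("docker ps failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	}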
	I0229 18:44:05.622668   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:05.623031   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.623066   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.638058   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44823
	I0229 18:44:05.638482   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.638964   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.638985   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.639298   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.639500   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.639802   63014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:44:05.640142   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.640184   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.654483   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0229 18:44:05.654869   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.655391   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.655411   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.655711   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.655946   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.692636   63014 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 18:44:05.694074   63014 start.go:299] selected driver: kvm2
	I0229 18:44:05.694084   63014 start.go:903] validating driver "kvm2" against &{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false
node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.694190   63014 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:44:05.694807   63014 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.694873   63014 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 18:44:05.709500   63014 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 18:44:05.710380   63014 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0229 18:44:05.710470   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:05.710493   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:05.710517   63014 start_flags.go:323] config:
	{Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> E
xposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:05.710788   63014 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:05.712665   63014 out.go:177] * Starting control plane node newest-cni-555986 in cluster newest-cni-555986
	I0229 18:44:03.148306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:05.151204   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:05.713933   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:05.713962   63014 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 18:44:05.713970   63014 cache.go:56] Caching tarball of preloaded images
	I0229 18:44:05.714027   63014 preload.go:174] Found /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:44:05.714037   63014 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 18:44:05.714127   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:05.714292   63014 start.go:365] acquiring machines lock for newest-cni-555986: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:44:05.714330   63014 start.go:369] acquired machines lock for "newest-cni-555986" in 19.249µs
	I0229 18:44:05.714342   63014 start.go:96] Skipping create...Using existing machine configuration
	I0229 18:44:05.714349   63014 fix.go:54] fixHost starting: 
	I0229 18:44:05.714583   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:05.714604   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:05.728926   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0229 18:44:05.729416   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:05.729927   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:05.729954   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:05.730372   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:05.730554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:05.730711   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:05.732365   63014 fix.go:102] recreateIfNeeded on newest-cni-555986: state=Stopped err=<nil>
	I0229 18:44:05.732405   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	W0229 18:44:05.732559   63014 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 18:44:05.734332   63014 out.go:177] * Restarting existing kvm2 VM for "newest-cni-555986" ...
	I0229 18:44:05.735801   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Start
	I0229 18:44:05.736011   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring networks are active...
	I0229 18:44:05.736741   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network default is active
	I0229 18:44:05.737082   63014 main.go:141] libmachine: (newest-cni-555986) Ensuring network mk-newest-cni-555986 is active
	I0229 18:44:05.737422   63014 main.go:141] libmachine: (newest-cni-555986) Getting domain xml...
	I0229 18:44:05.738474   63014 main.go:141] libmachine: (newest-cni-555986) Creating domain...
	I0229 18:44:06.970960   63014 main.go:141] libmachine: (newest-cni-555986) Waiting to get IP...
	I0229 18:44:06.971959   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:06.972427   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:06.972494   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:06.972409   63049 retry.go:31] will retry after 191.930654ms: waiting for machine to come up
	I0229 18:44:07.165902   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.166504   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.166542   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.166425   63049 retry.go:31] will retry after 380.972246ms: waiting for machine to come up
	I0229 18:44:07.549044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.549505   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.549533   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.549448   63049 retry.go:31] will retry after 409.460218ms: waiting for machine to come up
	I0229 18:44:07.960093   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:07.960729   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:07.960764   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:07.960680   63049 retry.go:31] will retry after 494.525541ms: waiting for machine to come up
	I0229 18:44:08.456512   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:08.457044   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:08.457070   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:08.457006   63049 retry.go:31] will retry after 702.742264ms: waiting for machine to come up
	I0229 18:44:09.160839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.161340   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.161399   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.161277   63049 retry.go:31] will retry after 791.133205ms: waiting for machine to come up
	I0229 18:44:09.953571   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:09.954234   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:09.954266   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:09.954187   63049 retry.go:31] will retry after 1.026362572s: waiting for machine to come up
	I0229 18:44:07.813986   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:07.834016   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:07.856292   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.856330   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:07.856390   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:07.874903   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.874933   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:07.874988   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:07.893822   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.893849   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:07.893904   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:07.911815   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.911840   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:07.911896   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:07.930733   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.930763   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:07.930821   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:07.950028   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.950062   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:07.950118   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:07.969192   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.969219   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:07.969281   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:07.988711   61028 logs.go:276] 0 containers: []
	W0229 18:44:07.988733   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:07.988742   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:07.988752   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:08.031566   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:08.031601   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:08.091610   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:08.091651   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:08.143480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:08.143515   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:08.159139   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:08.159166   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:08.238088   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:07.647412   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:09.648220   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:10.982639   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:10.983122   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:10.983154   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:10.983063   63049 retry.go:31] will retry after 1.165405321s: waiting for machine to come up
	I0229 18:44:12.150037   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:12.150578   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:12.150613   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:12.150537   63049 retry.go:31] will retry after 1.52706972s: waiting for machine to come up
	I0229 18:44:13.680375   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:13.680960   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:13.680989   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:13.680906   63049 retry.go:31] will retry after 1.671273511s: waiting for machine to come up
	I0229 18:44:15.354871   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:15.355467   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:15.355498   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:15.355404   63049 retry.go:31] will retry after 2.220860221s: waiting for machine to come up
	I0229 18:44:10.738478   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:10.756305   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:10.780161   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.780191   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:10.780244   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:10.799891   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.799921   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:10.799981   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:10.815310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.815340   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:10.815401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:10.843908   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.843934   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:10.843996   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:10.864272   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.864295   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:10.864349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:10.882310   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.882336   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:10.882407   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:10.899979   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.900006   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:10.900064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:10.917343   61028 logs.go:276] 0 containers: []
	W0229 18:44:10.917373   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:10.917385   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:10.917399   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:10.970492   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:10.970529   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:10.985824   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:10.985850   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:11.063258   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:11.063281   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:11.063296   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:11.106836   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:11.106866   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:13.671084   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:13.685411   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:13.705142   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.705173   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:13.705234   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:13.724509   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.724548   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:13.724614   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:13.744230   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.744280   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:13.744348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:13.769730   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.769759   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:13.769817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:13.799466   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.799496   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:13.799556   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:13.820793   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.820823   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:13.820887   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:13.850052   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.850082   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:13.850138   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:13.874449   61028 logs.go:276] 0 containers: []
	W0229 18:44:13.874477   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:13.874489   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:13.874504   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:13.932481   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:13.932513   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:13.947628   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:13.947677   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:14.018240   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:14.018263   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:14.018286   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:14.059187   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:14.059217   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:12.145489   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:14.145878   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:17.577867   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:17.578465   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:17.578495   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:17.578412   63049 retry.go:31] will retry after 2.588260964s: waiting for machine to come up
	I0229 18:44:20.170174   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:20.170629   63014 main.go:141] libmachine: (newest-cni-555986) DBG | unable to find current IP address of domain newest-cni-555986 in network mk-newest-cni-555986
	I0229 18:44:20.170654   63014 main.go:141] libmachine: (newest-cni-555986) DBG | I0229 18:44:20.170589   63049 retry.go:31] will retry after 4.074488221s: waiting for machine to come up
	I0229 18:44:16.633510   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:16.652639   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:16.673532   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.673566   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:16.673618   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:16.691920   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.691945   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:16.692006   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:16.709420   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.709443   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:16.709484   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:16.727650   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.727681   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:16.727734   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:16.746267   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.746293   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:16.746344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:16.774818   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.774849   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:16.774900   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:16.799617   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.799650   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:16.799704   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:16.820466   61028 logs.go:276] 0 containers: []
	W0229 18:44:16.820501   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:16.820515   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:16.820528   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:16.887246   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:16.887289   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:16.902847   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:16.902872   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:16.980952   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:16.980973   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:16.980990   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:17.026066   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:17.026101   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:19.597286   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:19.613257   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:19.630212   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.630243   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:19.630298   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:19.647871   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.647899   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:19.647953   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:19.664725   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.664760   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:19.664817   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:19.682528   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.682560   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:19.682617   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:19.700820   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.700850   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:19.700917   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:19.718645   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.718673   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:19.718736   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:19.737246   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.737289   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:19.737344   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:19.754748   61028 logs.go:276] 0 containers: []
	W0229 18:44:19.754776   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:19.754793   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:19.754805   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:19.809195   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:19.809230   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:19.830327   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:19.830365   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:19.918269   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:19.918296   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:19.918313   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:19.960393   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:19.960425   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:16.146999   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:18.646605   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:24.249123   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249536   63014 main.go:141] libmachine: (newest-cni-555986) Found IP for machine: 192.168.61.240
	I0229 18:44:24.249570   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has current primary IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.249577   63014 main.go:141] libmachine: (newest-cni-555986) Reserving static IP address...
	I0229 18:44:24.249960   63014 main.go:141] libmachine: (newest-cni-555986) Reserved static IP address: 192.168.61.240
	I0229 18:44:24.249990   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.250000   63014 main.go:141] libmachine: (newest-cni-555986) Waiting for SSH to be available...
	I0229 18:44:24.250017   63014 main.go:141] libmachine: (newest-cni-555986) DBG | skip adding static IP to network mk-newest-cni-555986 - found existing host DHCP lease matching {name: "newest-cni-555986", mac: "52:54:00:9b:53:df", ip: "192.168.61.240"}
	I0229 18:44:24.250026   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Getting to WaitForSSH function...
	I0229 18:44:24.251971   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252153   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.252193   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.252293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH client type: external
	I0229 18:44:24.252326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa (-rw-------)
	I0229 18:44:24.252368   63014 main.go:141] libmachine: (newest-cni-555986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0229 18:44:24.252384   63014 main.go:141] libmachine: (newest-cni-555986) DBG | About to run SSH command:
	I0229 18:44:24.252417   63014 main.go:141] libmachine: (newest-cni-555986) DBG | exit 0
	I0229 18:44:24.375769   63014 main.go:141] libmachine: (newest-cni-555986) DBG | SSH cmd err, output: <nil>: 
	I0229 18:44:24.376112   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetConfigRaw
	I0229 18:44:24.376787   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.379469   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.379875   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.379924   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.380139   63014 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/config.json ...
	I0229 18:44:24.380315   63014 machine.go:88] provisioning docker machine ...
	I0229 18:44:24.380331   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.380554   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380737   63014 buildroot.go:166] provisioning hostname "newest-cni-555986"
	I0229 18:44:24.380758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.380942   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.383071   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383373   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.383403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.383495   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.383671   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383843   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.383976   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.384136   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.384337   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.384352   63014 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-555986 && echo "newest-cni-555986" | sudo tee /etc/hostname
	I0229 18:44:24.498766   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-555986
	
	I0229 18:44:24.498797   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.501346   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.501704   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.501941   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.502122   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502289   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.502432   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.502647   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.502863   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.502893   63014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-555986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-555986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-555986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:44:24.614045   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:44:24.614077   63014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
	I0229 18:44:24.614100   63014 buildroot.go:174] setting up certificates
	I0229 18:44:24.614109   63014 provision.go:83] configureAuth start
	I0229 18:44:24.614117   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetMachineName
	I0229 18:44:24.614363   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:24.616878   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617257   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.617279   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.617476   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.619950   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620245   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.620267   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.620394   63014 provision.go:138] copyHostCerts
	I0229 18:44:24.620452   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
	I0229 18:44:24.620464   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
	I0229 18:44:24.620556   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
	I0229 18:44:24.620684   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
	I0229 18:44:24.620696   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
	I0229 18:44:24.620741   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
	I0229 18:44:24.620804   63014 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
	I0229 18:44:24.620813   63014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
	I0229 18:44:24.620834   63014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
	I0229 18:44:24.620882   63014 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.newest-cni-555986 san=[192.168.61.240 192.168.61.240 localhost 127.0.0.1 minikube newest-cni-555986]
	I0229 18:44:24.827181   63014 provision.go:172] copyRemoteCerts
	I0229 18:44:24.827251   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:44:24.827279   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.829858   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.830156   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.830301   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.830508   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.830669   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.830821   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:24.912148   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:44:24.940337   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 18:44:24.964760   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:44:24.989172   63014 provision.go:86] duration metric: configureAuth took 375.052041ms
	I0229 18:44:24.989199   63014 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:44:24.989409   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:24.989435   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:24.989688   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:24.992106   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992563   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:24.992611   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:24.992758   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:24.992974   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993154   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:24.993340   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:24.993520   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:24.993692   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:24.993704   63014 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:44:25.097791   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:44:25.097813   63014 buildroot.go:70] root file system type: tmpfs
	I0229 18:44:25.097929   63014 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:44:25.097947   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.100783   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.101236   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.101447   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.101676   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.101861   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.102013   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.102184   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.102339   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.102416   63014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:44:25.226726   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:44:25.226753   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:25.229479   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.229789   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:25.229817   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:25.230008   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:25.230223   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230411   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:25.230581   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:25.230775   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:25.230956   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:25.230980   63014 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:44:22.520192   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:22.534228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:22.552116   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.552147   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:22.552192   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:22.574830   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.574867   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:22.574933   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:22.594718   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.594752   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:22.594810   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:22.615676   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.615711   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:22.615772   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:22.635359   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.635393   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:22.635455   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:22.655352   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.655381   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:22.655442   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:22.673481   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.673508   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:22.673562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:22.691542   61028 logs.go:276] 0 containers: []
	W0229 18:44:22.691563   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:22.691573   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:22.691583   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:22.741934   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:22.741964   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:22.760644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:22.760681   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:22.838701   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:22.838724   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:22.838737   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:22.879863   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:22.879892   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.442546   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:25.456540   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:25.476142   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.476168   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:25.476213   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:25.494185   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.494216   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:25.494275   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:25.517155   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.517187   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:25.517251   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:25.535776   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.535805   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:25.535864   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:25.554255   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.554283   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:25.554326   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:25.571356   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.571383   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:25.571438   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:25.589129   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.589158   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:25.589218   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:25.607610   61028 logs.go:276] 0 containers: []
	W0229 18:44:25.607654   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:25.607667   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:25.607683   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:25.669924   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:25.669954   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:21.145364   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:23.146563   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:25.146956   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:26.132356   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:44:26.132385   63014 machine.go:91] provisioned docker machine in 1.75205798s
	I0229 18:44:26.132402   63014 start.go:300] post-start starting for "newest-cni-555986" (driver="kvm2")
	I0229 18:44:26.132418   63014 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:44:26.132438   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.132741   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:44:26.132770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.135459   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135816   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.135839   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.135993   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.136198   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.136380   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.136509   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.220695   63014 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:44:26.225534   63014 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:44:26.225565   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
	I0229 18:44:26.225648   63014 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
	I0229 18:44:26.225753   63014 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
	I0229 18:44:26.225877   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:44:26.236218   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:26.260637   63014 start.go:303] post-start completed in 128.220021ms
	I0229 18:44:26.260663   63014 fix.go:56] fixHost completed within 20.546314149s
	I0229 18:44:26.260683   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.263403   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263761   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.263791   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.263979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.264190   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264376   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.264513   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.264704   63014 main.go:141] libmachine: Using SSH client type: native
	I0229 18:44:26.264952   63014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil>  [] 0s} 192.168.61.240 22 <nil> <nil>}
	I0229 18:44:26.264972   63014 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:44:26.364534   63014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232266.337605764
	
	I0229 18:44:26.364556   63014 fix.go:206] guest clock: 1709232266.337605764
	I0229 18:44:26.364566   63014 fix.go:219] Guest: 2024-02-29 18:44:26.337605764 +0000 UTC Remote: 2024-02-29 18:44:26.260667088 +0000 UTC m=+20.709360868 (delta=76.938676ms)
	I0229 18:44:26.364589   63014 fix.go:190] guest clock delta is within tolerance: 76.938676ms
	I0229 18:44:26.364595   63014 start.go:83] releasing machines lock for "newest-cni-555986", held for 20.650256948s
	I0229 18:44:26.364617   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.364856   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:26.367497   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.367884   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.367914   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.368067   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368783   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:26.368848   63014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:44:26.368893   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.369018   63014 ssh_runner.go:195] Run: cat /version.json
	I0229 18:44:26.369042   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:26.371814   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372058   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372134   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372159   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372329   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372406   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:26.372429   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:26.372486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372561   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:26.372642   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.372759   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:26.372837   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.372910   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:26.373031   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:26.471860   63014 ssh_runner.go:195] Run: systemctl --version
	I0229 18:44:26.478160   63014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:44:26.483953   63014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:44:26.484004   63014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:44:26.501209   63014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:44:26.501232   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.501345   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.520439   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:44:26.532631   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:44:26.544776   63014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:44:26.544846   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:44:26.556908   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.571173   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:44:26.584793   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:44:26.599578   63014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:44:26.613065   63014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:44:26.625963   63014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:44:26.636208   63014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:44:26.647304   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:26.773666   63014 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:44:26.805201   63014 start.go:475] detecting cgroup driver to use...
	I0229 18:44:26.805282   63014 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:44:26.828840   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.845685   63014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:44:26.864281   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:44:26.878719   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.891594   63014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:44:26.918028   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:44:26.932594   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:44:26.953389   63014 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:44:26.957403   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:44:26.966554   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:44:26.983908   63014 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:44:27.099127   63014 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:44:27.229263   63014 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:44:27.229402   63014 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:44:27.248050   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:27.370928   63014 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:44:28.846692   63014 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.475728413s)
	I0229 18:44:28.846793   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:44:28.862710   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:28.876125   63014 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:44:28.990050   63014 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:44:29.111415   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.241702   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:44:29.259418   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:44:29.274090   63014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:44:29.405739   63014 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:44:29.483337   63014 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:44:29.483415   63014 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:44:29.489731   63014 start.go:543] Will wait 60s for crictl version
	I0229 18:44:29.489807   63014 ssh_runner.go:195] Run: which crictl
	I0229 18:44:29.493965   63014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:44:29.551137   63014 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:44:29.551214   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.585366   63014 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:44:29.616533   63014 out.go:204] * Preparing Kubernetes v1.29.0-rc.2 on Docker 24.0.7 ...
	I0229 18:44:29.616588   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetIP
	I0229 18:44:29.619293   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619645   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:29.619671   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:29.619927   63014 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0229 18:44:29.624040   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.638664   63014 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0229 18:44:29.640035   63014 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 18:44:29.640131   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.661958   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.662001   63014 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:44:29.662060   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:29.681050   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:29.681077   63014 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:44:29.681146   63014 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:44:29.705900   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:29.705930   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:29.705950   63014 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0229 18:44:29.705973   63014 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.240 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-555986 NodeName:newest-cni-555986 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:44:29.706192   63014 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-555986"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:44:29.706334   63014 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-555986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:44:29.706410   63014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0229 18:44:29.717785   63014 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:44:29.717857   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:44:29.728573   63014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0229 18:44:29.746192   63014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0229 18:44:29.763094   63014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0229 18:44:29.780941   63014 ssh_runner.go:195] Run: grep 192.168.61.240	control-plane.minikube.internal$ /etc/hosts
	I0229 18:44:29.784664   63014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:44:29.796533   63014 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986 for IP: 192.168.61.240
	I0229 18:44:29.796569   63014 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:29.796698   63014 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
	I0229 18:44:29.796746   63014 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
	I0229 18:44:29.796809   63014 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/client.key
	I0229 18:44:29.796890   63014 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key.0e2de265
	I0229 18:44:29.796948   63014 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key
	I0229 18:44:29.797064   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
	W0229 18:44:29.797094   63014 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
	I0229 18:44:29.797103   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
	I0229 18:44:29.797124   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
	I0229 18:44:29.797154   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
	I0229 18:44:29.797188   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
	I0229 18:44:29.797243   63014 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
	I0229 18:44:29.797875   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:44:29.822101   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:44:29.847169   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:44:29.871405   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/newest-cni-555986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:44:29.898154   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:44:29.931310   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:44:29.957589   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:44:29.983801   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0229 18:44:30.011017   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
	I0229 18:44:30.037607   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:44:30.067042   63014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
	I0229 18:44:30.092561   63014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:44:30.111494   63014 ssh_runner.go:195] Run: openssl version
	I0229 18:44:30.117488   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
	I0229 18:44:30.128877   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133493   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.133540   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
	I0229 18:44:30.139567   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
	I0229 18:44:30.150842   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
	I0229 18:44:30.161780   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166396   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.166447   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
	I0229 18:44:30.172649   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:44:30.183406   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:44:30.194175   63014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198677   63014 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.198732   63014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:44:30.204430   63014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:44:30.215298   63014 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:44:30.219939   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:44:30.225927   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:44:30.231724   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:44:30.237680   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:44:30.243550   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:44:30.249342   63014 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:44:30.255106   63014 kubeadm.go:404] StartCluster: {Name:newest-cni-555986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-555986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:44:30.255230   63014 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:30.272612   63014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:44:30.283794   63014 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:44:30.283824   63014 kubeadm.go:636] restartCluster start
	I0229 18:44:30.283885   63014 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:44:30.295185   63014 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:30.296063   63014 kubeconfig.go:135] verify returned: extract IP: "newest-cni-555986" does not appear in /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:30.296546   63014 kubeconfig.go:146] "newest-cni-555986" context is missing from /home/jenkins/minikube-integration/18259-6402/kubeconfig - will repair!
	I0229 18:44:30.297381   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:30.299196   63014 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:44:30.309378   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.309439   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.322034   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:25.721765   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:25.721797   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:25.748884   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:25.748919   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:25.862593   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:25.862613   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:25.862627   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.412364   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:28.426168   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:28.444018   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.444048   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:28.444104   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:28.462393   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.462422   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:28.462481   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:28.480993   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.481021   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:28.481065   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:28.498930   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.498974   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:28.499034   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:28.517355   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.517386   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:28.517452   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:28.536493   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.536522   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:28.536629   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:28.554364   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.554392   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:28.554448   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:28.573203   61028 logs.go:276] 0 containers: []
	W0229 18:44:28.573229   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:28.573241   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:28.573260   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:28.628788   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:28.628820   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:28.647595   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:28.647631   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:28.726195   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:28.726215   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:28.726228   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:28.783540   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:28.783575   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:27.147370   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:29.653339   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:30.810019   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:30.810100   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:30.822777   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.310338   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.310472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.324112   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.809551   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:31.809687   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:31.822657   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.310271   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.310348   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.324846   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:32.810460   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:32.810534   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:32.824072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.309541   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.309620   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.323749   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:33.810371   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:33.810472   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:33.823564   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.309724   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.309805   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.322875   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:34.809427   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:34.809539   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:34.823871   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:35.310485   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.310554   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.324367   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:31.358413   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:31.374228   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:31.392618   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.392649   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:31.392713   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:31.411406   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.411437   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:31.411497   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:31.431126   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.431157   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:31.431204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:31.451504   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.451531   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:31.451571   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:31.470318   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.470339   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:31.470388   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:31.489264   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.489289   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:31.489341   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:31.507636   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.507672   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:31.507730   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:31.526580   61028 logs.go:276] 0 containers: []
	W0229 18:44:31.526602   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:31.526614   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:31.526634   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:31.568164   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:31.568199   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:31.627762   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:31.627786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:31.678480   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:31.678514   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:31.695623   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:31.695659   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:31.793131   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.293320   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:34.307693   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:34.328775   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.328805   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:34.328863   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:34.347049   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.347075   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:34.347126   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:34.365903   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.365933   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:34.365993   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:34.383898   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.383932   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:34.383995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:34.402605   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.402632   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:34.402694   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:34.420889   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.420918   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:34.420976   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:34.439973   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.440000   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:34.440059   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:34.457452   61028 logs.go:276] 0 containers: []
	W0229 18:44:34.457483   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:34.457496   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:34.457510   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:34.505134   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:34.505167   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:34.520181   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:34.520212   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:34.589435   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:34.589455   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:34.589466   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:34.634139   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:34.634168   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:32.149594   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:34.645888   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:35.809842   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:35.809911   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:35.823992   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.309548   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.309649   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.322861   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:36.810470   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:36.810541   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:36.824023   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.309492   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.309593   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.323072   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:37.809581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:37.809688   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:37.822964   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.309476   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.309584   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.322909   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:38.810487   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:38.810602   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:38.824118   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.309581   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.309683   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.323438   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:39.810045   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:39.810149   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:39.823071   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.309893   63014 api_server.go:166] Checking apiserver status ...
	I0229 18:44:40.309956   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:44:40.326570   63014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:44:40.326600   63014 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
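The run of failures above is the tool polling `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until its context deadline expires, at which point it concludes the cluster needs reconfiguring. A minimal Go sketch of that wait-for-process loop; the interval and timeout below are illustrative, not minikube's actual values:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep for a process whose full command line matches
// pattern, giving up when ctx expires.
func waitForProcess(ctx context.Context, pattern string) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when a matching process exists.
		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("process %q never appeared: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
	fmt.Println(pid, err)
}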
	I0229 18:44:40.326612   63014 kubeadm.go:1135] stopping kube-system containers ...
	I0229 18:44:40.326684   63014 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:44:40.350696   63014 docker.go:483] Stopping containers: [c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3]
	I0229 18:44:40.350775   63014 ssh_runner.go:195] Run: docker stop c19ea3451cd2 88a8b4a37a99 70f2ddd5234e 00bb6fa8abda 5ff0c86feaf3 6320f118d157 ea9f78556237 6930407a7128 ccc8393fded7 c1567139efc3 685917db87aa 2017842b803f 31eb102faed8 9453dc170c08 3ca8a70e4e7b 5e176b8058b3
	I0229 18:44:40.379218   63014 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 18:44:40.406202   63014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:40.418532   63014 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
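The status-2 exit above simply means none of the four kubeconfig files exist yet, so the stale-config cleanup is skipped and the reconfigure goes on to regenerate certs and kubeconfigs. A short Go sketch of the same existence check, with the paths taken from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	missing := 0
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Println("missing:", f)
			missing++
		}
	}
	if missing > 0 {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}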
	I0229 18:44:40.418593   63014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430345   63014 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:40.430371   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:40.561772   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:37.197653   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:37.211167   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:37.233259   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.233294   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:37.233349   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:37.254237   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.254264   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:37.254322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:37.274320   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.274347   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:37.274401   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:37.292854   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.292880   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:37.292929   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:37.310405   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.310429   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:37.310466   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:37.328374   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.328394   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:37.328434   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:37.345294   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.345321   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:37.345383   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:37.362743   61028 logs.go:276] 0 containers: []
	W0229 18:44:37.362768   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:37.362779   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:37.362793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:37.410877   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:37.410914   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:37.425653   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:37.425689   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:37.490957   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:37.490981   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:37.490994   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:37.530316   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:37.530344   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:40.088251   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:40.102064   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:40.121304   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.121338   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:40.121392   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:40.139634   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.139682   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:40.139742   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:40.156924   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.156950   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:40.156995   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:40.174050   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.174076   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:40.174117   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:40.191417   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.191444   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:40.191488   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:40.209488   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.209515   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:40.209578   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:40.226753   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.226775   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:40.226828   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:40.244478   61028 logs.go:276] 0 containers: []
	W0229 18:44:40.244505   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:40.244516   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:40.244526   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:40.299257   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:40.299293   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:40.316326   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:40.316356   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:40.407508   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:40.407531   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:40.407545   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:40.450989   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:40.451022   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:37.145550   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:39.645463   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:41.139942   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.337079   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:41.447658   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
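Rather than running a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd; the addon phase follows later once the apiserver is healthy) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch that drives the same sequence, with the PATH prefix and config path taken from the log (run here via a local shell rather than minikube's SSH runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase ` + p + " --config " + cfg
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("phase %q: err=%v\n%s\n", p, err, out)
	}
}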
	I0229 18:44:41.519164   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:41.519271   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.020352   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:42.519558   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.020287   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.051672   63014 api_server.go:72] duration metric: took 1.532507495s to wait for apiserver process to appear ...
	I0229 18:44:43.051702   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:43.051723   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.052327   63014 api_server.go:269] stopped: https://192.168.61.240:8443/healthz: Get "https://192.168.61.240:8443/healthz": dial tcp 192.168.61.240:8443: connect: connection refused
	I0229 18:44:43.552797   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:43.024851   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:43.040954   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:43.067062   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.067087   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:43.067142   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:43.112898   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.112929   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:43.112987   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:43.144432   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.144516   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:43.144577   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:43.180141   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.180170   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:43.180217   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:43.203493   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.203521   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:43.203562   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:43.227035   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.227065   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:43.227120   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:43.247867   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.247897   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:43.247959   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:43.269511   61028 logs.go:276] 0 containers: []
	W0229 18:44:43.269538   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:43.269550   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:43.269566   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:43.287349   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:43.287380   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:43.368033   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:43.368051   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:43.368062   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:43.425200   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:43.425235   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:43.492870   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:43.492906   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:41.648546   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:44.146476   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:46.415578   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.415614   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.415633   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.462403   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 18:44:46.462439   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:44:46.552650   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:46.559420   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:46.559454   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.052823   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.059079   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.059117   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:47.552719   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:47.561838   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 18:44:47.561869   63014 api_server.go:103] status: https://192.168.61.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:44:48.052436   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:48.057072   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:48.064135   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:48.064164   63014 api_server.go:131] duration metric: took 5.012454851s to wait for apiserver health ...
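The healthz wait above treats a 403 from the anonymous probe and a 500 while poststart hooks (rbac/bootstrap-roles and friends) are still settling as "not ready yet", and only succeeds once GET /healthz returns 200 with body "ok". A minimal sketch of such a probe, using the endpoint from the log; TLS verification is skipped here purely for illustration instead of loading the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: skip certificate verification rather than load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.240:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. connection refused while the apiserver is still coming up
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}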
	I0229 18:44:48.064173   63014 cni.go:84] Creating CNI manager for ""
	I0229 18:44:48.064185   63014 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:44:48.066074   63014 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 18:44:48.067507   63014 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 18:44:48.078593   63014 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 18:44:48.102538   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:48.114933   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:48.114965   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114972   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:48.114979   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:48.114985   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:48.114990   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:48.114995   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 18:44:48.115002   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:48.115006   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:48.115011   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 18:44:48.115017   63014 system_pods.go:74] duration metric: took 12.45428ms to wait for pod list to return data ...
	I0229 18:44:48.115024   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:48.118425   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:48.118453   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:48.118465   63014 node_conditions.go:105] duration metric: took 3.434927ms to run NodePressure ...
	I0229 18:44:48.118487   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 18:44:48.394218   63014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:44:48.407374   63014 ops.go:34] apiserver oom_adj: -16
	I0229 18:44:48.407397   63014 kubeadm.go:640] restartCluster took 18.123565128s
	I0229 18:44:48.407408   63014 kubeadm.go:406] StartCluster complete in 18.152305653s
	I0229 18:44:48.407427   63014 settings.go:142] acquiring lock: {Name:mk85324150508323d0a817853e472a1fdcadc314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.407503   63014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 18:44:48.408551   63014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/kubeconfig: {Name:mkede6c98b96f796a1583193f11427d41bdcdf0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:44:48.408794   63014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:44:48.408811   63014 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:44:48.408877   63014 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-555986"
	I0229 18:44:48.408884   63014 addons.go:69] Setting dashboard=true in profile "newest-cni-555986"
	I0229 18:44:48.408904   63014 addons.go:234] Setting addon dashboard=true in "newest-cni-555986"
	I0229 18:44:48.408910   63014 addons.go:69] Setting metrics-server=true in profile "newest-cni-555986"
	I0229 18:44:48.408925   63014 addons.go:234] Setting addon metrics-server=true in "newest-cni-555986"
	W0229 18:44:48.408930   63014 addons.go:243] addon dashboard should already be in state true
	W0229 18:44:48.408936   63014 addons.go:243] addon metrics-server should already be in state true
	I0229 18:44:48.408961   63014 addons.go:69] Setting default-storageclass=true in profile "newest-cni-555986"
	I0229 18:44:48.408905   63014 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-555986"
	I0229 18:44:48.408987   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	W0229 18:44:48.408996   63014 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:44:48.408999   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409016   63014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-555986"
	I0229 18:44:48.409070   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.408985   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.409048   63014 cache.go:107] acquiring lock: {Name:mk0db597c024ca72f3d806b204928d2d6d5c0ca9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:44:48.409212   63014 cache.go:115] /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I0229 18:44:48.409221   63014 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 177.702µs
	I0229 18:44:48.409233   63014 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I0229 18:44:48.409247   63014 cache.go:87] Successfully saved all images to host disk.
	I0229 18:44:48.409439   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409451   63014 config.go:182] Loaded profile config "newest-cni-555986": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0229 18:44:48.409463   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409524   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409532   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409545   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409558   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409652   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.409679   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.409964   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.410023   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.414076   63014 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-555986" context rescaled to 1 replicas
	I0229 18:44:48.414110   63014 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.240 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:44:48.416128   63014 out.go:177] * Verifying Kubernetes components...
	I0229 18:44:48.417753   63014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:48.430067   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0229 18:44:48.430297   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0229 18:44:48.430412   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0229 18:44:48.430460   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0229 18:44:48.430866   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.430972   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431065   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431545   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431550   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431548   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.431582   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431566   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.431597   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.431929   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.431972   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432206   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432253   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.432290   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0229 18:44:48.432364   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.432382   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.432574   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.432606   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.432958   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.432959   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.433540   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.433565   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.433650   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.434192   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.434219   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.434624   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.435113   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435154   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.435691   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.435710   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.436522   63014 addons.go:234] Setting addon default-storageclass=true in "newest-cni-555986"
	W0229 18:44:48.436539   63014 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:44:48.436571   63014 host.go:66] Checking if "newest-cni-555986" exists ...
	I0229 18:44:48.436949   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.436982   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.453519   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0229 18:44:48.453637   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0229 18:44:48.454123   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454220   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.454725   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454745   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.454863   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.454877   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.455157   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455208   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.455283   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0229 18:44:48.455442   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455605   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.455688   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.456149   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.456163   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.456470   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.456608   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.456786   63014 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:44:48.456811   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.458869   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.461038   63014 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0229 18:44:48.459183   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.460680   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.461477   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.462531   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 18:44:48.462548   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 18:44:48.462566   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.462647   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.462653   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.462678   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.464438   63014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:44:48.462979   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.465829   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.465902   63014 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.465920   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:44:48.465925   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.465937   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.466007   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
	I0229 18:44:48.466429   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.466991   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.467012   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.467180   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.467205   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.467371   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.467432   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.467587   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.467594   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.467770   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.467913   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.469491   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.472294   63014 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0229 18:44:48.470549   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.470960   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.475176   63014 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0229 18:44:48.473898   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.474017   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.476581   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0229 18:44:48.476603   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0229 18:44:48.475264   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.475413   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.476620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.476878   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.477547   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0229 18:44:48.477887   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.478368   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.478381   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.478677   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.479096   63014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:44:48.479124   63014 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:44:48.480199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480659   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.480684   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.480955   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.481090   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.481242   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.481405   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.494480   63014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I0229 18:44:48.494928   63014 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:44:48.495370   63014 main.go:141] libmachine: Using API Version  1
	I0229 18:44:48.495394   63014 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:44:48.495667   63014 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:44:48.495799   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetState
	I0229 18:44:48.497441   63014 main.go:141] libmachine: (newest-cni-555986) Calling .DriverName
	I0229 18:44:48.497645   63014 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.497657   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:44:48.497667   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHHostname
	I0229 18:44:48.500838   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501326   63014 main.go:141] libmachine: (newest-cni-555986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:53:df", ip: ""} in network mk-newest-cni-555986: {Iface:virbr3 ExpiryTime:2024-02-29 19:44:17 +0000 UTC Type:0 Mac:52:54:00:9b:53:df Iaid: IPaddr:192.168.61.240 Prefix:24 Hostname:newest-cni-555986 Clientid:01:52:54:00:9b:53:df}
	I0229 18:44:48.501380   63014 main.go:141] libmachine: (newest-cni-555986) DBG | domain newest-cni-555986 has defined IP address 192.168.61.240 and MAC address 52:54:00:9b:53:df in network mk-newest-cni-555986
	I0229 18:44:48.501593   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHPort
	I0229 18:44:48.501804   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHKeyPath
	I0229 18:44:48.501963   63014 main.go:141] libmachine: (newest-cni-555986) Calling .GetSSHUsername
	I0229 18:44:48.502090   63014 sshutil.go:53] new ssh client: &{IP:192.168.61.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/newest-cni-555986/id_rsa Username:docker}
	I0229 18:44:48.742737   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 18:44:48.742770   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0229 18:44:48.753599   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0229 18:44:48.753628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0229 18:44:48.765187   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:44:48.781474   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:44:48.837624   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0229 18:44:48.837655   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0229 18:44:48.847412   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 18:44:48.847440   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 18:44:48.878964   63014 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:44:48.879048   63014 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:44:48.879052   63014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:48.879064   63014 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	registry.k8s.io/kube-proxy:v1.29.0-rc.2
	registry.k8s.io/etcd:3.5.10-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:44:48.879082   63014 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:44:48.879095   63014 cache_images.go:262] succeeded pushing to: newest-cni-555986
	I0229 18:44:48.879101   63014 cache_images.go:263] failed pushing to: 
	I0229 18:44:48.879122   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879135   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879510   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:48.879520   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879539   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.879565   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:48.879620   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:48.879876   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:48.879907   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:48.945106   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0229 18:44:48.945130   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0229 18:44:48.946603   63014 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:48.946628   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 18:44:49.013179   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0229 18:44:49.013199   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0229 18:44:49.036118   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 18:44:49.122858   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0229 18:44:49.122892   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0229 18:44:49.215329   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0229 18:44:49.215361   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0229 18:44:49.228881   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.228905   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229150   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229175   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.229199   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.229245   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.229262   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.229590   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.229607   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.236908   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:49.236931   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:49.237194   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:49.237213   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:49.237232   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:49.313570   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0229 18:44:49.313605   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0229 18:44:49.375520   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0229 18:44:49.375549   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0229 18:44:49.445233   63014 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:49.445262   63014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0229 18:44:49.520309   63014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0229 18:44:50.293009   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.511487012s)
	I0229 18:44:50.293056   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293069   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293082   63014 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.413998817s)
	I0229 18:44:50.293122   63014 api_server.go:72] duration metric: took 1.878985811s to wait for apiserver process to appear ...
	I0229 18:44:50.293139   63014 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:44:50.293159   63014 api_server.go:253] Checking apiserver healthz at https://192.168.61.240:8443/healthz ...
	I0229 18:44:50.293390   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293444   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293454   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.293472   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.293486   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.293745   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.293858   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.293880   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.300808   63014 api_server.go:279] https://192.168.61.240:8443/healthz returned 200:
	ok
	I0229 18:44:50.303536   63014 api_server.go:141] control plane version: v1.29.0-rc.2
	I0229 18:44:50.303558   63014 api_server.go:131] duration metric: took 10.411694ms to wait for apiserver health ...
	I0229 18:44:50.303569   63014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:44:50.310252   63014 system_pods.go:59] 9 kube-system pods found
	I0229 18:44:50.310280   63014 system_pods.go:61] "coredns-76f75df574-7sk9v" [3ba565d8-54d9-4674-973a-98f157a47ba7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310290   63014 system_pods.go:61] "coredns-76f75df574-7vxkd" [120c60fa-d672-4077-b1c2-5bba0d1d3c75] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 18:44:50.310298   63014 system_pods.go:61] "etcd-newest-cni-555986" [dfae4678-fa38-41c1-a2e0-ce2ba6088306] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:44:50.310307   63014 system_pods.go:61] "kube-apiserver-newest-cni-555986" [2a74fb80-3d99-4e37-ad6d-3a6607f5323a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:44:50.310316   63014 system_pods.go:61] "kube-controller-manager-newest-cni-555986" [bf49df40-968e-4efc-90f9-d47f78a2c083] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:44:50.310335   63014 system_pods.go:61] "kube-proxy-dsghq" [a3352d42-cd06-4cef-91ea-bc6c994756b6] Running
	I0229 18:44:50.310343   63014 system_pods.go:61] "kube-scheduler-newest-cni-555986" [8bf8ae43-e091-48fa-8f45-0c88218a922d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:44:50.310356   63014 system_pods.go:61] "metrics-server-57f55c9bc5-9slkc" [da889b21-3c80-49d6-aca6-b0903dfb1115] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:44:50.310365   63014 system_pods.go:61] "storage-provisioner" [f83d16ca-74e0-421a-b839-32927649d5b5] Running
	I0229 18:44:50.310376   63014 system_pods.go:74] duration metric: took 6.800137ms to wait for pod list to return data ...
	I0229 18:44:50.310386   63014 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:44:50.313209   63014 default_sa.go:45] found service account: "default"
	I0229 18:44:50.313231   63014 default_sa.go:55] duration metric: took 2.835138ms for default service account to be created ...
	I0229 18:44:50.313244   63014 kubeadm.go:581] duration metric: took 1.899107276s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0229 18:44:50.313262   63014 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:44:50.315732   63014 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:44:50.315752   63014 node_conditions.go:123] node cpu capacity is 2
	I0229 18:44:50.315765   63014 node_conditions.go:105] duration metric: took 2.49465ms to run NodePressure ...
	I0229 18:44:50.315778   63014 start.go:228] waiting for startup goroutines ...
	I0229 18:44:50.412181   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.376016712s)
	I0229 18:44:50.412237   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412253   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412517   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412562   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412602   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412620   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.412632   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.412844   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.412879   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.412886   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.412909   63014 addons.go:470] Verifying addon metrics-server=true in "newest-cni-555986"
	I0229 18:44:50.642086   63014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.121716457s)
	I0229 18:44:50.642146   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642162   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642465   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642487   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642498   63014 main.go:141] libmachine: Making call to close driver server
	I0229 18:44:50.642506   63014 main.go:141] libmachine: (newest-cni-555986) Calling .Close
	I0229 18:44:50.642526   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.642764   63014 main.go:141] libmachine: Successfully made call to close driver server
	I0229 18:44:50.642774   63014 main.go:141] libmachine: Making call to close connection to plugin binary
	I0229 18:44:50.642777   63014 main.go:141] libmachine: (newest-cni-555986) DBG | Closing plugin on server side
	I0229 18:44:50.644564   63014 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-555986 addons enable metrics-server
	
	I0229 18:44:50.646195   63014 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0229 18:44:46.045085   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:46.060842   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:46.080115   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.080151   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:46.080204   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:46.098951   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.098977   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:46.099045   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:46.117884   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.117914   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:46.117962   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:46.135090   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.135122   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:46.135183   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:46.154068   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.154094   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:46.154150   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:46.175259   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.175291   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:46.175348   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:46.199979   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.200010   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:46.200073   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:46.219082   61028 logs.go:276] 0 containers: []
	W0229 18:44:46.219109   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:46.219118   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:46.219129   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:46.285752   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:46.285802   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:46.362896   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:46.362923   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:46.424465   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:46.424496   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:46.440644   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:46.440676   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:46.516207   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.017356   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:49.036558   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:49.062037   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.062073   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:49.062122   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:49.089359   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.089383   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:49.089436   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:49.112366   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.112397   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:49.112447   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:49.135268   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.135300   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:49.135357   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:49.158768   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.158795   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:49.158862   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:49.182032   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.182056   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:49.182100   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:49.202844   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.202880   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:49.202937   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:49.223496   61028 logs.go:276] 0 containers: []
	W0229 18:44:49.223522   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:49.223533   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:49.223548   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:49.283784   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:49.283833   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:49.299408   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:49.299450   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:49.381751   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:49.381777   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:49.381793   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:49.425633   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:49.425671   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:50.647633   63014 addons.go:505] enable addons completed in 2.238822444s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0229 18:44:50.647682   63014 start.go:233] waiting for cluster config update ...
	I0229 18:44:50.647711   63014 start.go:242] writing updated cluster config ...
	I0229 18:44:50.648039   63014 ssh_runner.go:195] Run: rm -f paused
	I0229 18:44:50.699121   63014 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 18:44:50.700743   63014 out.go:177] * Done! kubectl is now configured to use "newest-cni-555986" cluster and "default" namespace by default
	I0229 18:44:46.147159   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:48.147947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:50.646890   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:51.992923   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:52.009101   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:44:52.030751   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.030778   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:44:52.030834   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:44:52.051175   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.051205   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:44:52.051258   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:44:52.070270   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.070292   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:44:52.070346   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:44:52.089729   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.089755   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:44:52.089807   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:44:52.109158   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.109181   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:44:52.109235   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:44:52.127440   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.127464   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:44:52.127509   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:44:52.146458   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.146485   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:44:52.146542   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:44:52.164899   61028 logs.go:276] 0 containers: []
	W0229 18:44:52.164925   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:44:52.164934   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:44:52.164944   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 18:44:52.223827   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:44:52.223870   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:44:52.245832   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:44:52.245869   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:44:52.350010   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:44:52.350037   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:44:52.350051   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:44:52.400763   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:44:52.400792   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:44:54.965688   61028 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:44:54.984737   61028 kubeadm.go:640] restartCluster took 4m13.179905747s
	W0229 18:44:54.984813   61028 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0229 18:44:54.984842   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:44:55.440354   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:44:55.456286   61028 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:44:55.467480   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:44:55.478159   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:44:55.478205   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:44:55.539798   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:44:55.539888   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:44:53.148909   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.149846   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:55.752087   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:44:55.752264   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:44:55.752401   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:44:55.906569   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:44:55.907774   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:44:55.917392   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:44:56.046677   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:44:56.048655   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:44:56.048771   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:44:56.048874   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:44:56.048992   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:44:56.052691   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:44:56.052805   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:44:56.052890   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:44:56.052984   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:44:56.053096   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:44:56.053215   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:44:56.053320   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:44:56.053379   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:44:56.053475   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:44:56.176574   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:44:56.329888   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:44:56.623253   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:44:56.722273   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:44:56.723020   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:44:56.724880   61028 out.go:204]   - Booting up control plane ...
	I0229 18:44:56.725005   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:44:56.730320   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:44:56.731630   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:44:56.732332   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:44:56.734500   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:44:57.646118   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:44:59.648032   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:02.144840   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:04.145112   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:06.146649   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:08.647051   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:11.148318   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:13.646816   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:16.145165   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:18.146437   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:20.147686   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:22.645925   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:25.146444   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:27.645765   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:29.646621   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:31.647146   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:34.145657   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:36.735482   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:45:36.736181   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:36.736433   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:36.145891   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:38.149811   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:40.646401   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:41.737158   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:41.737332   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:43.145942   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:45.146786   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:47.648714   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:50.145240   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:51.737722   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:45:51.737923   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:45:52.145341   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:54.145559   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:56.646087   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:45:58.646249   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:00.646466   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:02.647293   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:05.146452   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:07.646128   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:10.147008   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:11.738541   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:11.738773   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:12.646406   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:14.647319   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:17.146097   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:19.146615   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:21.147384   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:23.646155   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:25.647369   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:28.146558   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:30.645408   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:32.649260   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:34.650076   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:37.146414   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:39.146947   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:41.645903   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:43.646016   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:45.646056   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:47.646659   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:49.647440   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:51.739942   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:46:51.740223   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:46:51.740253   61028 kubeadm.go:322] 
	I0229 18:46:51.740302   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:46:51.740342   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:46:51.740349   61028 kubeadm.go:322] 
	I0229 18:46:51.740377   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:46:51.740404   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:46:51.740528   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:46:51.740544   61028 kubeadm.go:322] 
	I0229 18:46:51.740646   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:46:51.740675   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:46:51.740726   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:46:51.740736   61028 kubeadm.go:322] 
	I0229 18:46:51.740844   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:46:51.740950   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:46:51.741029   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:46:51.741103   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:46:51.741204   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:46:51.741261   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:46:51.742036   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:46:51.742190   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:46:51.742337   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:46:51.742464   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:46:51.742640   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:46:51.742725   61028 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:46:51.742786   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:46:52.197144   61028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:46:52.214163   61028 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:46:52.226374   61028 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:46:52.226416   61028 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:46:52.285152   61028 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 18:46:52.285314   61028 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:46:52.500283   61028 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:46:52.500430   61028 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:46:52.500558   61028 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:46:52.672731   61028 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:46:52.672847   61028 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:46:52.681682   61028 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 18:46:52.809851   61028 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:46:52.811832   61028 out.go:204]   - Generating certificates and keys ...
	I0229 18:46:52.811937   61028 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:46:52.812027   61028 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:46:52.812099   61028 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:46:52.812153   61028 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:46:52.812252   61028 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:46:52.812333   61028 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:46:52.812427   61028 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:46:52.812513   61028 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:46:52.812652   61028 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:46:52.813069   61028 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:46:52.813244   61028 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:46:52.813324   61028 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:46:52.931955   61028 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:46:53.294257   61028 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:46:53.376114   61028 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:46:53.620085   61028 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:46:53.620974   61028 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:46:53.622696   61028 out.go:204]   - Booting up control plane ...
	I0229 18:46:53.622772   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:46:53.627326   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:46:53.628386   61028 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:46:53.629224   61028 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:46:53.632638   61028 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:46:52.145625   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:54.146306   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:56.146385   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:46:58.649533   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:01.145784   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:03.648061   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:06.145955   60121 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace has status "Ready":"False"
	I0229 18:47:07.645834   60121 pod_ready.go:81] duration metric: took 4m0.007156334s waiting for pod "metrics-server-57f55c9bc5-w95ms" in "kube-system" namespace to be "Ready" ...
	E0229 18:47:07.645859   60121 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0229 18:47:07.645869   60121 pod_ready.go:38] duration metric: took 4m1.184866089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:47:07.645887   60121 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:47:07.645945   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:07.671520   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:07.671613   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:07.691503   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:07.691571   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:07.710557   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:07.710627   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:07.730780   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:07.730868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:07.749894   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:07.749981   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:07.772545   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:07.772620   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:07.791523   60121 logs.go:276] 0 containers: []
	W0229 18:47:07.791554   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:07.791604   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:07.812744   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:07.812833   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:07.831469   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:07.831505   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:07.831515   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:07.904596   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:07.904778   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:07.929197   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:07.929234   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:07.965399   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:07.965430   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:07.997552   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:07.997582   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:08.043918   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:08.043954   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:08.068540   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:08.068569   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:08.093297   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:08.093326   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:08.160393   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:08.160432   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:08.234099   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:08.234128   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:08.249381   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:08.249406   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:08.411423   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:08.411457   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:08.440486   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:08.440516   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:08.474207   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:08.474320   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:08.498143   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498169   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:08.498225   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:08.498241   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:08.498252   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:08.498266   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:08.498277   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:18.499396   60121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:47:18.517660   60121 api_server.go:72] duration metric: took 4m15.022647547s to wait for apiserver process to appear ...
	I0229 18:47:18.517688   60121 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:47:18.517766   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:18.542263   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:18.542333   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:18.565885   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:18.565964   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:18.585135   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:18.585213   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:18.605789   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:18.605850   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:18.624993   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:18.625062   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:18.648049   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:18.648118   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:18.668689   60121 logs.go:276] 0 containers: []
	W0229 18:47:18.668713   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:18.668759   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:18.691741   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:18.691813   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:18.713776   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:18.713810   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:18.713823   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:18.781369   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:18.781564   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:18.808924   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:18.808965   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:18.824723   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:18.824756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:18.854531   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:18.854576   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:18.897618   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:18.897650   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:18.936914   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:18.936946   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:19.011250   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:19.011280   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:19.075817   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:19.075850   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:19.200261   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:19.200299   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:19.236988   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:19.237015   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:19.269721   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:19.269750   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:19.296918   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:19.296944   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:19.319721   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:19.319753   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:19.342330   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342355   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:19.342410   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:19.342423   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:19.342429   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:19.342437   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:19.342447   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:29.343918   60121 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8444/healthz ...
	I0229 18:47:29.350861   60121 api_server.go:279] https://192.168.39.148:8444/healthz returned 200:
	ok
	I0229 18:47:29.352541   60121 api_server.go:141] control plane version: v1.28.4
	I0229 18:47:29.352560   60121 api_server.go:131] duration metric: took 10.834865386s to wait for apiserver health ...
	I0229 18:47:29.352569   60121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:47:29.352633   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:47:29.373466   60121 logs.go:276] 1 containers: [a6c30185a4c6]
	I0229 18:47:29.373535   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:47:29.394287   60121 logs.go:276] 1 containers: [e2afcba737ca]
	I0229 18:47:29.394375   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:47:29.415331   60121 logs.go:276] 1 containers: [51873fe1b3a4]
	I0229 18:47:29.415410   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:47:29.436682   60121 logs.go:276] 1 containers: [710b98bbbd9a]
	I0229 18:47:29.436764   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:47:29.456935   60121 logs.go:276] 1 containers: [515bab7887a3]
	I0229 18:47:29.457003   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:47:29.475799   60121 logs.go:276] 1 containers: [6fc8d7000dc4]
	I0229 18:47:29.475868   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:47:29.496876   60121 logs.go:276] 0 containers: []
	W0229 18:47:29.496904   60121 logs.go:278] No container was found matching "kindnet"
	I0229 18:47:29.496963   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:47:29.516724   60121 logs.go:276] 1 containers: [b4713066c769]
	I0229 18:47:29.516794   60121 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0229 18:47:29.535652   60121 logs.go:276] 1 containers: [19c7b79202ca]
	I0229 18:47:29.535683   60121 logs.go:123] Gathering logs for kube-proxy [515bab7887a3] ...
	I0229 18:47:29.535693   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 515bab7887a3"
	I0229 18:47:29.559535   60121 logs.go:123] Gathering logs for kubernetes-dashboard [b4713066c769] ...
	I0229 18:47:29.559563   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4713066c769"
	I0229 18:47:29.587928   60121 logs.go:123] Gathering logs for storage-provisioner [19c7b79202ca] ...
	I0229 18:47:29.587952   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19c7b79202ca"
	I0229 18:47:29.610085   60121 logs.go:123] Gathering logs for Docker ...
	I0229 18:47:29.610111   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:47:29.673987   60121 logs.go:123] Gathering logs for container status ...
	I0229 18:47:29.674033   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:47:29.751324   60121 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:47:29.751355   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0229 18:47:29.876322   60121 logs.go:123] Gathering logs for coredns [51873fe1b3a4] ...
	I0229 18:47:29.876347   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51873fe1b3a4"
	I0229 18:47:29.900325   60121 logs.go:123] Gathering logs for kube-scheduler [710b98bbbd9a] ...
	I0229 18:47:29.900349   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710b98bbbd9a"
	I0229 18:47:29.936137   60121 logs.go:123] Gathering logs for etcd [e2afcba737ca] ...
	I0229 18:47:29.936167   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2afcba737ca"
	I0229 18:47:29.969468   60121 logs.go:123] Gathering logs for kube-controller-manager [6fc8d7000dc4] ...
	I0229 18:47:29.969499   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fc8d7000dc4"
	I0229 18:47:30.017539   60121 logs.go:123] Gathering logs for kubelet ...
	I0229 18:47:30.017587   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:47:30.093486   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.093682   60121 logs.go:138] Found kubelet problem: Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.124169   60121 logs.go:123] Gathering logs for dmesg ...
	I0229 18:47:30.124211   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:47:30.140725   60121 logs.go:123] Gathering logs for kube-apiserver [a6c30185a4c6] ...
	I0229 18:47:30.140756   60121 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a6c30185a4c6"
	I0229 18:47:30.174590   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174628   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 18:47:30.174694   60121 out.go:239] X Problems detected in kubelet:
	W0229 18:47:30.174708   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: W0229 18:43:07.024663    9820 reflector.go:535] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	W0229 18:47:30.174715   60121 out.go:239]   Feb 29 18:43:07 default-k8s-diff-port-270866 kubelet[9820]: E0229 18:43:07.024705    9820 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-270866" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'default-k8s-diff-port-270866' and this object
	I0229 18:47:30.174726   60121 out.go:304] Setting ErrFile to fd 2...
	I0229 18:47:30.174731   60121 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:33.634399   61028 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:47:33.635096   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:33.635349   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:40.185191   60121 system_pods.go:59] 8 kube-system pods found
	I0229 18:47:40.185222   60121 system_pods.go:61] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.185227   60121 system_pods.go:61] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.185232   60121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.185235   60121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.185238   60121 system_pods.go:61] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.185241   60121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.185247   60121 system_pods.go:61] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.185251   60121 system_pods.go:61] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.185257   60121 system_pods.go:74] duration metric: took 10.832681757s to wait for pod list to return data ...
	I0229 18:47:40.185264   60121 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:47:40.188055   60121 default_sa.go:45] found service account: "default"
	I0229 18:47:40.188075   60121 default_sa.go:55] duration metric: took 2.8056ms for default service account to be created ...
	I0229 18:47:40.188083   60121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:47:40.199288   60121 system_pods.go:86] 8 kube-system pods found
	I0229 18:47:40.199317   60121 system_pods.go:89] "coredns-5dd5756b68-jdlzl" [dad557b0-e5cb-412d-a8f4-4183136089fa] Running
	I0229 18:47:40.199325   60121 system_pods.go:89] "etcd-default-k8s-diff-port-270866" [c0d589ed-b1f2-4c68-a816-a690d2f5f85b] Running
	I0229 18:47:40.199330   60121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-270866" [b23ff12d-b067-4d20-9ec6-246c621c645f] Running
	I0229 18:47:40.199335   60121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-270866" [475ddc96-bca1-4107-b5fe-d1b5f6a606a8] Running
	I0229 18:47:40.199340   60121 system_pods.go:89] "kube-proxy-94www" [7f22c0eb-9843-4473-a19c-926569888bd1] Running
	I0229 18:47:40.199347   60121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-270866" [b5e17115-a696-4662-b963-542b69988077] Running
	I0229 18:47:40.199359   60121 system_pods.go:89] "metrics-server-57f55c9bc5-w95ms" [b0448782-c240-4b77-8227-cf05bee26427] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 18:47:40.199369   60121 system_pods.go:89] "storage-provisioner" [4b2f2255-040b-44fd-876d-622d11bb639f] Running
	I0229 18:47:40.199383   60121 system_pods.go:126] duration metric: took 11.294328ms to wait for k8s-apps to be running ...
	I0229 18:47:40.199394   60121 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:47:40.199452   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:47:40.216650   60121 system_svc.go:56] duration metric: took 17.247343ms WaitForService to wait for kubelet.
	I0229 18:47:40.216679   60121 kubeadm.go:581] duration metric: took 4m36.72166867s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:47:40.216705   60121 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:47:40.220111   60121 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:47:40.220142   60121 node_conditions.go:123] node cpu capacity is 2
	I0229 18:47:40.220157   60121 node_conditions.go:105] duration metric: took 3.446433ms to run NodePressure ...
	I0229 18:47:40.220172   60121 start.go:228] waiting for startup goroutines ...
	I0229 18:47:40.220180   60121 start.go:233] waiting for cluster config update ...
	I0229 18:47:40.220193   60121 start.go:242] writing updated cluster config ...
	I0229 18:47:40.220531   60121 ssh_runner.go:195] Run: rm -f paused
	I0229 18:47:40.268347   60121 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:47:40.270302   60121 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-270866" cluster and "default" namespace by default
	I0229 18:47:38.635813   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:38.636020   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:47:48.636649   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:47:48.636873   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:08.637971   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:08.638214   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639456   61028 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:48:48.639757   61028 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:48:48.639779   61028 kubeadm.go:322] 
	I0229 18:48:48.639840   61028 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 18:48:48.639924   61028 kubeadm.go:322] 	timed out waiting for the condition
	I0229 18:48:48.639950   61028 kubeadm.go:322] 
	I0229 18:48:48.640004   61028 kubeadm.go:322] This error is likely caused by:
	I0229 18:48:48.640046   61028 kubeadm.go:322] 	- The kubelet is not running
	I0229 18:48:48.640168   61028 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:48:48.640178   61028 kubeadm.go:322] 
	I0229 18:48:48.640273   61028 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:48:48.640313   61028 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 18:48:48.640347   61028 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 18:48:48.640353   61028 kubeadm.go:322] 
	I0229 18:48:48.640439   61028 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:48:48.640559   61028 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 18:48:48.640671   61028 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:48:48.640752   61028 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:48:48.640864   61028 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 18:48:48.640919   61028 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 18:48:48.641703   61028 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:48:48.641878   61028 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 18:48:48.641968   61028 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:48:48.642071   61028 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:48:48.642249   61028 kubeadm.go:406] StartCluster complete in 8m6.867140018s
	I0229 18:48:48.642265   61028 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:48:48.642322   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:48:48.674320   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.674348   61028 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:48:48.674398   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:48:48.695124   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.695148   61028 logs.go:278] No container was found matching "etcd"
	I0229 18:48:48.695190   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:48:48.712218   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.712245   61028 logs.go:278] No container was found matching "coredns"
	I0229 18:48:48.712299   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:48:48.730912   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.730939   61028 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:48:48.730982   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:48:48.748542   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.748576   61028 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:48:48.748622   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:48:48.765544   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.765570   61028 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:48:48.765623   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:48:48.791193   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.791238   61028 logs.go:278] No container was found matching "kindnet"
	I0229 18:48:48.791296   61028 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0229 18:48:48.813084   61028 logs.go:276] 0 containers: []
	W0229 18:48:48.813119   61028 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0229 18:48:48.813132   61028 logs.go:123] Gathering logs for dmesg ...
	I0229 18:48:48.813144   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:48:48.834348   61028 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:48:48.834373   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:48:48.911451   61028 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:48:48.911473   61028 logs.go:123] Gathering logs for Docker ...
	I0229 18:48:48.911485   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:48:48.954088   61028 logs.go:123] Gathering logs for container status ...
	I0229 18:48:48.954119   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0229 18:48:49.019061   61028 logs.go:123] Gathering logs for kubelet ...
	I0229 18:48:49.019092   61028 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:48:49.067347   61028 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:48:49.067396   61028 out.go:239] * 
	W0229 18:48:49.067456   61028 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.067477   61028 out.go:239] * 
	W0229 18:48:49.068210   61028 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:48:49.072114   61028 out.go:177] 
	W0229 18:48:49.073581   61028 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:48:49.073626   61028 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:48:49.073649   61028 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:48:49.075293   61028 out.go:177] 
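	The Suggestion line above points at the cgroupfs/systemd cgroup-driver mismatch reported in the preflight warnings. A minimal retry sketch based only on that suggestion — the profile name, Kubernetes version, and the --extra-config value come from this log; the kvm2 driver, the delete step, and the ssh/journalctl check are illustrative assumptions, not part of the test run:
	
		# recreate the profile with the kubelet cgroup driver forced to systemd, as suggested above
		out/minikube-linux-amd64 delete -p old-k8s-version-467811
		out/minikube-linux-amd64 start -p old-k8s-version-467811 --kubernetes-version=v1.16.0 --driver=kvm2 \
		  --extra-config=kubelet.cgroup-driver=systemd
		# if the control plane still fails the healthz probe on localhost:10248, inspect the kubelet unit on the node
		out/minikube-linux-amd64 ssh -p old-k8s-version-467811 -- sudo journalctl -xeu kubelet | tail -n 100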
	
	
	==> Docker <==
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050425153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050467385Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050514780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050552148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050590447Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050660627Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050699694Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050735468Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050781822Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.050897158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051019076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051064571Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051441623Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051565243Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051659095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 18:40:38 old-k8s-version-467811 dockerd[1062]: time="2024-02-29T18:40:38.051747686Z" level=info msg="containerd successfully booted in 0.034113s"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.252862682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.297343935Z" level=info msg="Loading containers: start."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.417489065Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.467932343Z" level=info msg="Loading containers: done."
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482234448Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.482355814Z" level=info msg="Daemon has completed initialization"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.517930017Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 18:40:40 old-k8s-version-467811 dockerd[1056]: time="2024-02-29T18:40:40.518369987Z" level=info msg="API listen on [::]:2376"
	Feb 29 18:40:40 old-k8s-version-467811 systemd[1]: Started Docker Application Container Engine.
	
	
	==> container status <==
	time="2024-02-29T19:03:54Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb29 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056516] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.804914] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.680946] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.687335] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.061500] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060694] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +1.140707] systemd-fstab-generator[780]: Ignoring "noauto" option for root device
	[  +0.360984] systemd-fstab-generator[816]: Ignoring "noauto" option for root device
	[  +0.131688] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.149280] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +5.508694] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.066369] kauditd_printk_skb: 236 callbacks suppressed
	[ +16.235011] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.074300] kauditd_printk_skb: 57 callbacks suppressed
	[Feb29 18:44] systemd-fstab-generator[9503]: Ignoring "noauto" option for root device
	[  +0.067712] kauditd_printk_skb: 21 callbacks suppressed
	[Feb29 18:46] systemd-fstab-generator[11264]: Ignoring "noauto" option for root device
	[  +0.072343] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:03:54 up 23 min,  0 users,  load average: 0.41, 0.26, 0.16
	Linux old-k8s-version-467811 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 29 19:03:52 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1360.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: I0229 19:03:53.328015   25694 server.go:410] Version: v1.16.0
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: I0229 19:03:53.328299   25694 plugins.go:100] No cloud provider specified.
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: I0229 19:03:53.328316   25694 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: I0229 19:03:53.330740   25694 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: W0229 19:03:53.331589   25694 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: W0229 19:03:53.331674   25694 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 19:03:53 old-k8s-version-467811 kubelet[25694]: F0229 19:03:53.332213   25694 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1361.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 29 19:03:53 old-k8s-version-467811 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: I0229 19:03:54.079053   25724 server.go:410] Version: v1.16.0
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: I0229 19:03:54.079314   25724 plugins.go:100] No cloud provider specified.
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: I0229 19:03:54.079330   25724 server.go:773] Client rotation is on, will bootstrap in background
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: I0229 19:03:54.085420   25724 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: W0229 19:03:54.088763   25724 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: W0229 19:03:54.089127   25724 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 29 19:03:54 old-k8s-version-467811 kubelet[25724]: F0229 19:03:54.089251   25724 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 29 19:03:54 old-k8s-version-467811 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 29 19:03:54 old-k8s-version-467811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 2 (247.952137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-467811" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (362.72s)
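
Note on the failure above: kubeadm could not bring the kubelet up, and minikube's own output suggests a cgroup-driver override. A minimal troubleshooting sketch based only on the suggestions quoted in that log (the profile name is the one used in this run; reaching the kvm2 node via `minikube ssh` is an assumption):

    minikube ssh -p old-k8s-version-467811 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-467811 -- sudo journalctl -xeu kubelet
    minikube start -p old-k8s-version-467811 --extra-config=kubelet.cgroup-driver=systemd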

                                                
                                    

Test pass (286/330)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 11.13
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
9 TestDownloadOnly/v1.16.0/DeleteAll 0.14
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 4.72
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 11.72
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.56
31 TestOffline 72.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 150.95
38 TestAddons/parallel/Registry 17.18
39 TestAddons/parallel/Ingress 20.57
40 TestAddons/parallel/InspektorGadget 11.34
41 TestAddons/parallel/MetricsServer 6.81
42 TestAddons/parallel/HelmTiller 11.33
44 TestAddons/parallel/CSI 70.27
45 TestAddons/parallel/Headlamp 16.34
46 TestAddons/parallel/CloudSpanner 5.71
47 TestAddons/parallel/LocalPath 55.72
48 TestAddons/parallel/NvidiaDevicePlugin 6.45
49 TestAddons/parallel/Yakd 6
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 13.42
54 TestCertOptions 77.66
55 TestCertExpiration 272.15
56 TestDockerFlags 58.95
57 TestForceSystemdFlag 83.52
58 TestForceSystemdEnv 62.65
60 TestKVMDriverInstallOrUpdate 3.78
64 TestErrorSpam/setup 49.18
65 TestErrorSpam/start 0.37
66 TestErrorSpam/status 0.78
67 TestErrorSpam/pause 1.27
68 TestErrorSpam/unpause 1.42
69 TestErrorSpam/stop 3.25
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 103.53
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 38.67
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.38
81 TestFunctional/serial/CacheCmd/cache/add_local 1.3
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 38.74
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.07
92 TestFunctional/serial/LogsFileCmd 1.09
93 TestFunctional/serial/InvalidService 4.26
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 23.1
97 TestFunctional/parallel/DryRun 0.33
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.13
103 TestFunctional/parallel/ServiceCmdConnect 8.78
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 53.12
107 TestFunctional/parallel/SSHCmd 0.47
108 TestFunctional/parallel/CpCmd 1.56
109 TestFunctional/parallel/MySQL 33.3
110 TestFunctional/parallel/FileSync 0.22
111 TestFunctional/parallel/CertSync 1.53
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.23
119 TestFunctional/parallel/License 0.18
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
122 TestFunctional/parallel/ProfileCmd/profile_list 0.37
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
124 TestFunctional/parallel/MountCmd/any-port 8.75
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.04
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.28
141 TestFunctional/parallel/ImageCommands/Setup 1.28
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.33
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.7
144 TestFunctional/parallel/MountCmd/specific-port 1.71
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.48
146 TestFunctional/parallel/ServiceCmd/List 0.31
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.42
150 TestFunctional/parallel/ServiceCmd/Format 0.39
151 TestFunctional/parallel/ServiceCmd/URL 0.37
152 TestFunctional/parallel/DockerEnv/bash 1.14
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.38
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.63
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.03
160 TestFunctional/delete_addon-resizer_images 0.07
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.02
163 TestGvisorAddon 254.26
166 TestImageBuild/serial/Setup 47.99
167 TestImageBuild/serial/NormalBuild 1.59
168 TestImageBuild/serial/BuildWithBuildArg 1.05
169 TestImageBuild/serial/BuildWithDockerIgnore 0.41
170 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
179 TestJSONOutput/start/Command 64.58
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.63
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 13.11
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.22
207 TestMainNoArgs 0.06
208 TestMinikubeProfile 104.59
211 TestMountStart/serial/StartWithMountFirst 31.02
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 29.72
214 TestMountStart/serial/VerifyMountSecond 0.39
215 TestMountStart/serial/DeleteFirst 0.91
216 TestMountStart/serial/VerifyMountPostDelete 0.4
217 TestMountStart/serial/Stop 2.09
218 TestMountStart/serial/RestartStopped 24.08
219 TestMountStart/serial/VerifyMountPostStop 0.39
222 TestMultiNode/serial/FreshStart2Nodes 122.53
223 TestMultiNode/serial/DeployApp2Nodes 4.6
224 TestMultiNode/serial/PingHostFrom2Pods 0.91
225 TestMultiNode/serial/AddNode 45.63
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 7.48
229 TestMultiNode/serial/StopNode 3.22
230 TestMultiNode/serial/StartAfterStop 24.94
231 TestMultiNode/serial/RestartKeepsNodes 159.6
232 TestMultiNode/serial/DeleteNode 1.72
233 TestMultiNode/serial/StopMultiNode 25.55
234 TestMultiNode/serial/RestartMultiNode 169.36
235 TestMultiNode/serial/ValidateNameConflict 48.93
240 TestPreload 206.63
242 TestScheduledStopUnix 122.68
243 TestSkaffold 146.77
246 TestRunningBinaryUpgrade 226.83
269 TestPause/serial/Start 117.44
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
272 TestNoKubernetes/serial/StartWithK8s 88.35
273 TestPause/serial/SecondStartNoReconfiguration 79.57
274 TestNoKubernetes/serial/StartWithStopK8s 11.44
275 TestNoKubernetes/serial/Start 30.44
276 TestPause/serial/Pause 0.7
277 TestPause/serial/VerifyStatus 0.24
278 TestPause/serial/Unpause 0.61
279 TestPause/serial/PauseAgain 0.78
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
281 TestNoKubernetes/serial/ProfileList 21.65
282 TestPause/serial/DeletePaused 1.03
283 TestPause/serial/VerifyDeletedResources 0.49
284 TestNoKubernetes/serial/Stop 2.39
285 TestNoKubernetes/serial/StartNoArgs 54.25
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
287 TestStoppedBinaryUpgrade/Setup 1.26
288 TestStoppedBinaryUpgrade/Upgrade 153.2
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.74
290 TestNetworkPlugins/group/auto/Start 103.32
291 TestNetworkPlugins/group/kindnet/Start 86.89
292 TestNetworkPlugins/group/calico/Start 139.52
293 TestNetworkPlugins/group/auto/KubeletFlags 0.24
294 TestNetworkPlugins/group/auto/NetCatPod 11.26
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/auto/DNS 0.18
297 TestNetworkPlugins/group/auto/Localhost 0.15
298 TestNetworkPlugins/group/auto/HairPin 0.15
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
300 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
301 TestNetworkPlugins/group/kindnet/DNS 0.27
302 TestNetworkPlugins/group/kindnet/Localhost 0.2
303 TestNetworkPlugins/group/kindnet/HairPin 0.22
304 TestNetworkPlugins/group/custom-flannel/Start 79.08
305 TestNetworkPlugins/group/false/Start 122.99
306 TestNetworkPlugins/group/enable-default-cni/Start 144.61
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.27
309 TestNetworkPlugins/group/calico/NetCatPod 11.27
310 TestNetworkPlugins/group/calico/DNS 0.18
311 TestNetworkPlugins/group/calico/Localhost 0.21
312 TestNetworkPlugins/group/calico/HairPin 0.16
313 TestNetworkPlugins/group/flannel/Start 95.43
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.24
316 TestNetworkPlugins/group/custom-flannel/DNS 0.24
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
319 TestNetworkPlugins/group/bridge/Start 110.91
320 TestNetworkPlugins/group/false/KubeletFlags 0.22
321 TestNetworkPlugins/group/false/NetCatPod 11.24
322 TestNetworkPlugins/group/false/DNS 0.24
323 TestNetworkPlugins/group/false/Localhost 0.19
324 TestNetworkPlugins/group/false/HairPin 0.21
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
327 TestNetworkPlugins/group/kubenet/Start 107.84
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
330 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
331 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
333 TestNetworkPlugins/group/flannel/NetCatPod 12.24
334 TestNetworkPlugins/group/flannel/DNS 0.2
335 TestNetworkPlugins/group/flannel/Localhost 0.18
336 TestNetworkPlugins/group/flannel/HairPin 0.2
340 TestStartStop/group/no-preload/serial/FirstStart 146.4
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
342 TestNetworkPlugins/group/bridge/NetCatPod 13.25
343 TestNetworkPlugins/group/bridge/DNS 0.23
344 TestNetworkPlugins/group/bridge/Localhost 0.18
345 TestNetworkPlugins/group/bridge/HairPin 0.2
347 TestStartStop/group/embed-certs/serial/FirstStart 107.25
348 TestNetworkPlugins/group/kubenet/KubeletFlags 0.22
349 TestNetworkPlugins/group/kubenet/NetCatPod 10.23
350 TestNetworkPlugins/group/kubenet/DNS 0.2
351 TestNetworkPlugins/group/kubenet/Localhost 0.16
352 TestNetworkPlugins/group/kubenet/HairPin 0.19
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.11
355 TestStartStop/group/no-preload/serial/DeployApp 8.32
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
357 TestStartStop/group/embed-certs/serial/DeployApp 8.34
358 TestStartStop/group/no-preload/serial/Stop 13.14
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
360 TestStartStop/group/embed-certs/serial/Stop 13.14
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
362 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
363 TestStartStop/group/no-preload/serial/SecondStart 319.16
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.16
366 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
367 TestStartStop/group/embed-certs/serial/SecondStart 317.1
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 619.88
372 TestStartStop/group/old-k8s-version/serial/Stop 2.14
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.01
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
379 TestStartStop/group/embed-certs/serial/Pause 2.89
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
382 TestStartStop/group/newest-cni/serial/FirstStart 69.99
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
384 TestStartStop/group/no-preload/serial/Pause 2.91
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.93
387 TestStartStop/group/newest-cni/serial/Stop 13.12
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
389 TestStartStop/group/newest-cni/serial/SecondStart 45.54
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
393 TestStartStop/group/newest-cni/serial/Pause 2.45
394 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
397 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.49
TestDownloadOnly/v1.16.0/json-events (11.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179119 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179119 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (11.127282977s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.13s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179119
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179119: exit status 85 (70.897334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-179119 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |          |
	|         | -p download-only-179119        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:37:08
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:37:08.846360   13617 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:37:08.846504   13617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:08.846517   13617 out.go:304] Setting ErrFile to fd 2...
	I0229 17:37:08.846525   13617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:08.846690   13617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	W0229 17:37:08.846816   13617 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18259-6402/.minikube/config/config.json: open /home/jenkins/minikube-integration/18259-6402/.minikube/config/config.json: no such file or directory
	I0229 17:37:08.847366   13617 out.go:298] Setting JSON to true
	I0229 17:37:08.848261   13617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1179,"bootTime":1709227050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:37:08.848326   13617 start.go:139] virtualization: kvm guest
	I0229 17:37:08.850970   13617 out.go:97] [download-only-179119] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:37:08.852475   13617 out.go:169] MINIKUBE_LOCATION=18259
	W0229 17:37:08.851072   13617 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball: no such file or directory
	I0229 17:37:08.851120   13617 notify.go:220] Checking for updates...
	I0229 17:37:08.855354   13617 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:37:08.856937   13617 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:37:08.858415   13617 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:37:08.859884   13617 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:37:08.862385   13617 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:37:08.862621   13617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:37:08.961510   13617 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:37:08.961544   13617 start.go:299] selected driver: kvm2
	I0229 17:37:08.961553   13617 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:37:08.961920   13617 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:08.962055   13617 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:37:08.977348   13617 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:37:08.977397   13617 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:37:08.977883   13617 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:37:08.978041   13617 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:37:08.978118   13617 cni.go:84] Creating CNI manager for ""
	I0229 17:37:08.978141   13617 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 17:37:08.978151   13617 start_flags.go:323] config:
	{Name:download-only-179119 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-179119 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:37:08.978397   13617 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:08.980147   13617 out.go:97] Downloading VM boot image ...
	I0229 17:37:08.980200   13617 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:37:11.493731   13617 out.go:97] Starting control plane node download-only-179119 in cluster download-only-179119
	I0229 17:37:11.493755   13617 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:37:11.515959   13617 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 17:37:11.515994   13617 cache.go:56] Caching tarball of preloaded images
	I0229 17:37:11.516144   13617 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:37:11.517942   13617 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 17:37:11.517958   13617 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:11.541732   13617 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 17:37:14.293910   13617 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:14.294000   13617 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:15.002190   13617 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 17:37:15.002725   13617 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/download-only-179119/config.json ...
	I0229 17:37:15.002770   13617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/download-only-179119/config.json: {Name:mk80a9c678914b81c568d0f355f2c78e46471667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:37:15.002982   13617 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:37:15.003220   13617 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-179119"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
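
Note: the "Last Start" log above shows the preload tarball being downloaded and then verified against an md5 checksum before it is cached. A rough shell equivalent of that verification step, using the URL and checksum quoted in the log (the local file name is hypothetical):

    curl -sSLo preload.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
    echo "326f3ce331abb64565b50b8c9e791244  preload.tar.lz4" | md5sum -c -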

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-179119
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (4.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-948064 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-948064 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (4.721155262s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.72s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-948064
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-948064: exit status 85 (71.837655ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-179119 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-179119        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| delete  | -p download-only-179119        | download-only-179119 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| start   | -o=json --download-only        | download-only-948064 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-948064        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:37:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:37:20.323422   13783 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:37:20.323556   13783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:20.323568   13783 out.go:304] Setting ErrFile to fd 2...
	I0229 17:37:20.323575   13783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:20.323819   13783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:37:20.324371   13783 out.go:298] Setting JSON to true
	I0229 17:37:20.325188   13783 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1191,"bootTime":1709227050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:37:20.325251   13783 start.go:139] virtualization: kvm guest
	I0229 17:37:20.327791   13783 out.go:97] [download-only-948064] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:37:20.329782   13783 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:37:20.327989   13783 notify.go:220] Checking for updates...
	I0229 17:37:20.332820   13783 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:37:20.334275   13783 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:37:20.335719   13783 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:37:20.337178   13783 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-948064"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-948064
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (11.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-791770 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-791770 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=kvm2 : (11.715549198s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.72s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-791770
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-791770: exit status 85 (73.902396ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-179119 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-179119           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| delete  | -p download-only-179119           | download-only-179119 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| start   | -o=json --download-only           | download-only-948064 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-948064           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| delete  | -p download-only-948064           | download-only-948064 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC | 29 Feb 24 17:37 UTC |
	| start   | -o=json --download-only           | download-only-791770 | jenkins | v1.32.0 | 29 Feb 24 17:37 UTC |                     |
	|         | -p download-only-791770           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:37:25
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:37:25.393231   13937 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:37:25.393511   13937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:25.393521   13937 out.go:304] Setting ErrFile to fd 2...
	I0229 17:37:25.393528   13937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:37:25.393738   13937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:37:25.394320   13937 out.go:298] Setting JSON to true
	I0229 17:37:25.395158   13937 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1196,"bootTime":1709227050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:37:25.395224   13937 start.go:139] virtualization: kvm guest
	I0229 17:37:25.397777   13937 out.go:97] [download-only-791770] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:37:25.399605   13937 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:37:25.397974   13937 notify.go:220] Checking for updates...
	I0229 17:37:25.402876   13937 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:37:25.404484   13937 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:37:25.406113   13937 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:37:25.407607   13937 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0229 17:37:25.410502   13937 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:37:25.410748   13937 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:37:25.442394   13937 out.go:97] Using the kvm2 driver based on user configuration
	I0229 17:37:25.442455   13937 start.go:299] selected driver: kvm2
	I0229 17:37:25.442468   13937 start.go:903] validating driver "kvm2" against <nil>
	I0229 17:37:25.442907   13937 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:25.443040   13937 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0229 17:37:25.457476   13937 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0229 17:37:25.457531   13937 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:37:25.458336   13937 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0229 17:37:25.458523   13937 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:37:25.458598   13937 cni.go:84] Creating CNI manager for ""
	I0229 17:37:25.458624   13937 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:37:25.458635   13937 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:37:25.458650   13937 start_flags.go:323] config:
	{Name:download-only-791770 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-791770 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:37:25.458823   13937 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:37:25.460814   13937 out.go:97] Starting control plane node download-only-791770 in cluster download-only-791770
	I0229 17:37:25.460829   13937 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:37:25.486677   13937 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:37:25.486704   13937 cache.go:56] Caching tarball of preloaded images
	I0229 17:37:25.486866   13937 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:37:25.488789   13937 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 17:37:25.488810   13937 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:25.517162   13937 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:37:31.848504   13937 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:31.848595   13937 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:37:32.544479   13937 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 17:37:32.544788   13937 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/download-only-791770/config.json ...
	I0229 17:37:32.544819   13937 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/download-only-791770/config.json: {Name:mk704076c6d91e7e362ec1b52c49af0c76d5bd12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:37:32.544962   13937 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:37:32.545093   13937 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-791770"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-791770
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-219138 --alsologtostderr --binary-mirror http://127.0.0.1:35523 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-219138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-219138
--- PASS: TestBinaryMirror (0.56s)
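For reference, the --binary-mirror flag used above points minikube's kubectl/kubelet/kubeadm downloads at an alternate base URL; in this run one is served locally on 127.0.0.1:35523. A minimal sketch of such a mirror in Go, assuming a local ./mirror directory laid out like dl.k8s.io (the directory name and setup are illustrative, not taken from the test code):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the hypothetical ./mirror directory, laid out like dl.k8s.io,
	// e.g. ./mirror/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving binary mirror on 127.0.0.1:35523")
	// The port matches the log above purely for illustration.
	log.Fatal(http.ListenAndServe("127.0.0.1:35523", nil))
}
```

A cluster would then be started against it with a command like the one in the log: out/minikube-linux-amd64 start --download-only --binary-mirror http://127.0.0.1:35523.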

TestOffline (72.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-718900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-718900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m11.640338161s)
helpers_test.go:175: Cleaning up "offline-docker-718900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-718900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-718900: (1.045561235s)
--- PASS: TestOffline (72.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-039717
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-039717: exit status 85 (62.428571ms)

-- stdout --
	* Profile "addons-039717" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-039717"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-039717
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-039717: exit status 85 (61.371517ms)

-- stdout --
	* Profile "addons-039717" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-039717"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (150.95s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-039717 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-039717 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.949862582s)
--- PASS: TestAddons/Setup (150.95s)

TestAddons/parallel/Registry (17.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.450771ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9wnkn" [0f1d21af-a601-463c-8968-505369093838] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007336524s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7d2qh" [e3790fbd-c247-43d6-b4ce-8a00d47fc14a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005625878s
addons_test.go:340: (dbg) Run:  kubectl --context addons-039717 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-039717 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-039717 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.254508166s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 ip
2024/02/29 17:40:25 [DEBUG] GET http://192.168.39.245:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.18s)
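The addons_test.go:345 step above verifies the registry addon from inside the cluster by launching a throwaway busybox pod that probes the registry Service's cluster DNS name. A rough standalone equivalent of that check is sketched below; it assumes kubectl is on PATH and reuses this run's context name for illustration (and uses -i rather than the log's -it, since there is no TTY here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run a one-off busybox pod that probes the in-cluster registry Service,
	// mirroring the wget --spider check performed by the test.
	cmd := exec.Command("kubectl", "--context", "addons-039717",
		"run", "registry-test", "--rm", "-i", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("registry not reachable from inside the cluster:", err)
	}
}
```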

TestAddons/parallel/Ingress (20.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-039717 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-039717 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-039717 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [06c8a51d-43d4-47f7-ab3a-bb13abe69480] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [06c8a51d-43d4-47f7-ab3a-bb13abe69480] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013811996s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-039717 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.245
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-039717 addons disable ingress-dns --alsologtostderr -v=1: (1.37142808s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-039717 addons disable ingress --alsologtostderr -v=1: (7.894000702s)
--- PASS: TestAddons/parallel/Ingress (20.57s)

TestAddons/parallel/InspektorGadget (11.34s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bn52c" [e612e08f-ec46-48d1-88fb-b54359dc9a49] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005588175s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-039717
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-039717: (6.331545685s)
--- PASS: TestAddons/parallel/InspektorGadget (11.34s)

TestAddons/parallel/MetricsServer (6.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.008248ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-cz4qv" [bc46ad28-12c2-41d5-af7c-437076c9d01b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004550728s
addons_test.go:415: (dbg) Run:  kubectl --context addons-039717 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.81s)

TestAddons/parallel/HelmTiller (11.33s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.757287ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zw9mh" [9f40b77f-a811-4a91-8327-0162e22fb172] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006170485s
addons_test.go:473: (dbg) Run:  kubectl --context addons-039717 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-039717 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.759934733s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.33s)

TestAddons/parallel/CSI (70.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 29.857914ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-039717 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-039717 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [376e236d-7e34-4152-8de9-998adaf82151] Pending
helpers_test.go:344: "task-pv-pod" [376e236d-7e34-4152-8de9-998adaf82151] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [376e236d-7e34-4152-8de9-998adaf82151] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004400446s
addons_test.go:584: (dbg) Run:  kubectl --context addons-039717 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-039717 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-039717 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-039717 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-039717 delete pod task-pv-pod: (1.35235557s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-039717 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-039717 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-039717 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2bc642e9-a543-4d80-bb5b-4efe85e032e6] Pending
helpers_test.go:344: "task-pv-pod-restore" [2bc642e9-a543-4d80-bb5b-4efe85e032e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2bc642e9-a543-4d80-bb5b-4efe85e032e6] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005094521s
addons_test.go:626: (dbg) Run:  kubectl --context addons-039717 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-039717 delete pod task-pv-pod-restore: (1.031997186s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-039717 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-039717 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-039717 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80237419s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.27s)
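The long runs of helpers_test.go:394 lines above are a poll loop: the helper re-reads the claim's .status.phase until it reports Bound (or the stated timeout expires) before the test moves on. A simplified sketch of that wait, with the context, claim name, namespace, and timing chosen for illustration rather than taken from the helper:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll the PVC's phase until it becomes Bound or the deadline passes.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-039717",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		fmt.Printf("pvc hpvc phase %q, retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}
```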

TestAddons/parallel/Headlamp (16.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-039717 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-039717 --alsologtostderr -v=1: (2.333430748s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-hqhxc" [811f451e-0d66-4517-936f-5ff6ccd2f851] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-hqhxc" [811f451e-0d66-4517-936f-5ff6ccd2f851] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-hqhxc" [811f451e-0d66-4517-936f-5ff6ccd2f851] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004597615s
--- PASS: TestAddons/parallel/Headlamp (16.34s)

TestAddons/parallel/CloudSpanner (5.71s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-kcxdj" [385dfd1d-2857-4eae-8022-c54ddae0d281] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003977112s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-039717
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

TestAddons/parallel/LocalPath (55.72s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-039717 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-039717 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [913caf51-614a-4ff2-86c7-5b703bb22403] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [913caf51-614a-4ff2-86c7-5b703bb22403] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [913caf51-614a-4ff2-86c7-5b703bb22403] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00386526s
addons_test.go:891: (dbg) Run:  kubectl --context addons-039717 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 ssh "cat /opt/local-path-provisioner/pvc-a5a0299d-129a-42ce-a0bd-53c08b84d714_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-039717 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-039717 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-039717 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-039717 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.815990374s)
--- PASS: TestAddons/parallel/LocalPath (55.72s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8bp7f" [c0cc6589-59b4-4456-884d-38c9629fba55] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005401321s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-039717
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-kgg44" [4e90916f-9fdc-421e-893a-0c7c7af4de07] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00359043s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-039717 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-039717 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-039717
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-039717: (13.109390947s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-039717
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-039717
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-039717
--- PASS: TestAddons/StoppedEnableDisable (13.42s)

TestCertOptions (77.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-556865 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E0229 18:25:23.977763   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:23.983048   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:23.993343   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:24.013686   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:24.054042   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:24.134426   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:24.294831   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:24.615433   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:25.256473   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:26.536666   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:29.097247   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:34.217862   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:25:44.458457   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
E0229 18:26:00.469998   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 18:26:04.939281   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-556865 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m16.110818131s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-556865 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-556865 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-556865 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-556865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-556865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-556865: (1.045236742s)
--- PASS: TestCertOptions (77.66s)

TestCertExpiration (272.15s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-325534 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0229 18:24:03.519797   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-325534 --memory=2048 --cert-expiration=3m --driver=kvm2 : (51.13747054s)
E0229 18:25:09.379777   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-325534 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-325534 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (39.931103753s)
helpers_test.go:175: Cleaning up "cert-expiration-325534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-325534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-325534: (1.07800126s)
--- PASS: TestCertExpiration (272.15s)

TestDockerFlags (58.95s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-296466 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-296466 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (56.846848916s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-296466 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-296466 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-296466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-296466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-296466: (1.638417344s)
--- PASS: TestDockerFlags (58.95s)

TestForceSystemdFlag (83.52s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-817002 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-817002 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m21.847969722s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-817002 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-817002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-817002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-817002: (1.413793937s)
--- PASS: TestForceSystemdFlag (83.52s)

TestForceSystemdEnv (62.65s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-887530 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-887530 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m1.374566737s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-887530 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-887530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-887530
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-887530: (1.036088163s)
--- PASS: TestForceSystemdEnv (62.65s)

TestKVMDriverInstallOrUpdate (3.78s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.78s)

TestErrorSpam/setup (49.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-539172 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-539172 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-539172 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-539172 --driver=kvm2 : (49.181844057s)
--- PASS: TestErrorSpam/setup (49.18s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.78s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 status
--- PASS: TestErrorSpam/status (0.78s)

TestErrorSpam/pause (1.27s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 pause
--- PASS: TestErrorSpam/pause (1.27s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (3.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 stop: (3.091937357s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-539172 --log_dir /tmp/nospam-539172 stop
--- PASS: TestErrorSpam/stop (3.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/test/nested/copy/13605/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (103.53s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-339868 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m43.528924613s)
--- PASS: TestFunctional/serial/StartWithProxy (103.53s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.67s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-339868 --alsologtostderr -v=8: (38.670800477s)
functional_test.go:659: soft start took 38.671456404s for "functional-339868" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.67s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-339868 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache add registry.k8s.io/pause:3.1
E0229 17:45:09.379162   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.385182   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.395478   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.415736   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.456801   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.537130   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:09.697553   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:10.018237   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache add registry.k8s.io/pause:3.3
E0229 17:45:10.658716   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-339868 /tmp/TestFunctionalserialCacheCmdcacheadd_local1200133343/001
E0229 17:45:11.939322   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache add minikube-local-cache-test:functional-339868
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache delete minikube-local-cache-test:functional-339868
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-339868
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (244.187628ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0229 17:45:14.499500   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 kubectl -- --context functional-339868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-339868 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (38.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0229 17:45:19.620174   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:29.860921   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:45:50.342027   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-339868 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.742467276s)
functional_test.go:757: restart took 38.74257875s for "functional-339868" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.74s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-339868 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 logs: (1.069396688s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.09s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 logs --file /tmp/TestFunctionalserialLogsFileCmd3055546022/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 logs --file /tmp/TestFunctionalserialLogsFileCmd3055546022/001/logs.txt: (1.090119114s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.09s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-339868 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-339868
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-339868: exit status 115 (300.401837ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.114:31878 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-339868 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 config get cpus: exit status 14 (82.584547ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 config get cpus: exit status 14 (57.487666ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

TestFunctional/parallel/DashboardCmd (23.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-339868 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-339868 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21543: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.10s)

TestFunctional/parallel/DryRun (0.33s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-339868 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (157.205992ms)

-- stdout --
	* [functional-339868] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0229 17:46:22.326539   21180 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:46:22.326743   21180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:46:22.326756   21180 out.go:304] Setting ErrFile to fd 2...
	I0229 17:46:22.326763   21180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:46:22.327568   21180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:46:22.328432   21180 out.go:298] Setting JSON to false
	I0229 17:46:22.329649   21180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1733,"bootTime":1709227050,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:46:22.329716   21180 start.go:139] virtualization: kvm guest
	I0229 17:46:22.331797   21180 out.go:177] * [functional-339868] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0229 17:46:22.333605   21180 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:46:22.333630   21180 notify.go:220] Checking for updates...
	I0229 17:46:22.334976   21180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:46:22.336437   21180 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:46:22.337700   21180 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:46:22.338914   21180 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:46:22.340426   21180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:46:22.342245   21180 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:46:22.342751   21180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:46:22.342811   21180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:46:22.360354   21180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I0229 17:46:22.360823   21180 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:46:22.361454   21180 main.go:141] libmachine: Using API Version  1
	I0229 17:46:22.361475   21180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:46:22.361904   21180 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:46:22.362211   21180 main.go:141] libmachine: (functional-339868) Calling .DriverName
	I0229 17:46:22.362569   21180 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:46:22.363000   21180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:46:22.363046   21180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:46:22.378632   21180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0229 17:46:22.379086   21180 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:46:22.379569   21180 main.go:141] libmachine: Using API Version  1
	I0229 17:46:22.379594   21180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:46:22.379927   21180 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:46:22.380123   21180 main.go:141] libmachine: (functional-339868) Calling .DriverName
	I0229 17:46:22.418856   21180 out.go:177] * Using the kvm2 driver based on existing profile
	I0229 17:46:22.420225   21180 start.go:299] selected driver: kvm2
	I0229 17:46:22.420253   21180 start.go:903] validating driver "kvm2" against &{Name:functional-339868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-339868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.114 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 C
ertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:46:22.420402   21180 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:46:22.422913   21180 out.go:177] 
	W0229 17:46:22.424297   21180 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0229 17:46:22.425797   21180 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-339868 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-339868 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (163.894568ms)

-- stdout --
	* [functional-339868] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0229 17:46:22.657484   21236 out.go:291] Setting OutFile to fd 1 ...
	I0229 17:46:22.657640   21236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:46:22.657676   21236 out.go:304] Setting ErrFile to fd 2...
	I0229 17:46:22.657693   21236 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:46:22.657975   21236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 17:46:22.658629   21236 out.go:298] Setting JSON to false
	I0229 17:46:22.659836   21236 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1733,"bootTime":1709227050,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0229 17:46:22.659926   21236 start.go:139] virtualization: kvm guest
	I0229 17:46:22.662548   21236 out.go:177] * [functional-339868] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0229 17:46:22.664213   21236 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:46:22.665653   21236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:46:22.664238   21236 notify.go:220] Checking for updates...
	I0229 17:46:22.667257   21236 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	I0229 17:46:22.668761   21236 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	I0229 17:46:22.670138   21236 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0229 17:46:22.671577   21236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:46:22.673263   21236 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:46:22.673810   21236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:46:22.673859   21236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:46:22.694036   21236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46357
	I0229 17:46:22.694516   21236 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:46:22.695115   21236 main.go:141] libmachine: Using API Version  1
	I0229 17:46:22.695146   21236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:46:22.695460   21236 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:46:22.695620   21236 main.go:141] libmachine: (functional-339868) Calling .DriverName
	I0229 17:46:22.695899   21236 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:46:22.696152   21236 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 17:46:22.696193   21236 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 17:46:22.710064   21236 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0229 17:46:22.710477   21236 main.go:141] libmachine: () Calling .GetVersion
	I0229 17:46:22.710955   21236 main.go:141] libmachine: Using API Version  1
	I0229 17:46:22.710985   21236 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 17:46:22.711301   21236 main.go:141] libmachine: () Calling .GetMachineName
	I0229 17:46:22.711469   21236 main.go:141] libmachine: (functional-339868) Calling .DriverName
	I0229 17:46:22.750575   21236 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0229 17:46:22.752024   21236 start.go:299] selected driver: kvm2
	I0229 17:46:22.752041   21236 start.go:903] validating driver "kvm2" against &{Name:functional-339868 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-339868 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.114 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 C
ertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:46:22.752159   21236 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:46:22.754407   21236 out.go:177] 
	W0229 17:46:22.755711   21236 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0229 17:46:22.757021   21236 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)

TestFunctional/parallel/ServiceCmdConnect (8.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-339868 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-339868 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-cqxb9" [61debd75-c985-416f-8dd6-a4fa66769159] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-cqxb9" [61debd75-c985-416f-8dd6-a4fa66769159] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.163187554s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.114:31870
functional_test.go:1671: http://192.168.39.114:31870: success! body:

Hostname: hello-node-connect-55497b8b78-cqxb9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.114:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.114:31870
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.78s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (53.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fcaedecb-ac0f-45f9-83ed-b69419379c2f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005959556s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-339868 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-339868 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-339868 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-339868 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cc707e4f-90be-4dbc-9dfc-33ce61e4217d] Pending
helpers_test.go:344: "sp-pod" [cc707e4f-90be-4dbc-9dfc-33ce61e4217d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cc707e4f-90be-4dbc-9dfc-33ce61e4217d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004756505s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-339868 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-339868 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-339868 delete -f testdata/storage-provisioner/pod.yaml: (2.189156232s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-339868 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [297efb5d-cdd5-4d73-914b-7ef5c0a9883a] Pending
helpers_test.go:344: "sp-pod" [297efb5d-cdd5-4d73-914b-7ef5c0a9883a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [297efb5d-cdd5-4d73-914b-7ef5c0a9883a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.004330933s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-339868 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (53.12s)

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh -n functional-339868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cp functional-339868:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1775558403/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh -n functional-339868 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh -n functional-339868 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.56s)

TestFunctional/parallel/MySQL (33.3s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-339868 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jljwm" [4fd187fc-8f78-419b-928c-f29428bf8faf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jljwm" [4fd187fc-8f78-419b-928c-f29428bf8faf] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.004101654s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;": exit status 1 (209.456512ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;": exit status 1 (199.655379ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
2024/02/29 17:46:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;": exit status 1 (162.876713ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-339868 exec mysql-859648c796-jljwm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (33.30s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13605/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /etc/test/nested/copy/13605/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13605.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /etc/ssl/certs/13605.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13605.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /usr/share/ca-certificates/13605.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/136052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /etc/ssl/certs/136052.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/136052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /usr/share/ca-certificates/136052.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-339868 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh "sudo systemctl is-active crio": exit status 1 (228.697387ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.23s)

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-339868 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-339868 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zw44x" [25945869-903b-41b6-9fdb-43dec103ad1c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zw44x" [25945869-903b-41b6-9fdb-43dec103ad1c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004736515s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "300.947012ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "67.659922ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "294.341355ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "75.723481ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (8.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdany-port780030228/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709228761785095492" to /tmp/TestFunctionalparallelMountCmdany-port780030228/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709228761785095492" to /tmp/TestFunctionalparallelMountCmdany-port780030228/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709228761785095492" to /tmp/TestFunctionalparallelMountCmdany-port780030228/001/test-1709228761785095492
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.445666ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 29 17:46 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 29 17:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 29 17:46 test-1709228761785095492
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh cat /mount-9p/test-1709228761785095492
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-339868 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [10b238fa-777f-471d-80f4-b24ad7336165] Pending
helpers_test.go:344: "busybox-mount" [10b238fa-777f-471d-80f4-b24ad7336165] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [10b238fa-777f-471d-80f4-b24ad7336165] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [10b238fa-777f-471d-80f4-b24ad7336165] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00464655s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-339868 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdany-port780030228/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.75s)
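
A condensed sketch of the 9p mount round-trip exercised above, assuming a host directory $SRC (placeholder); the test runs the mount command as a background daemon, approximated here with &:

  # mount a host directory into the guest over 9p
  minikube mount -p functional-339868 $SRC:/mount-9p &
  # confirm the mount is visible inside the guest, then inspect and unmount it
  minikube -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-339868 ssh -- ls -la /mount-9p
  minikube -p functional-339868 ssh "sudo umount -f /mount-9p"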

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 version -o=json --components: (1.039122135s)
--- PASS: TestFunctional/parallel/Version/components (1.04s)
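
Both Version subtests are plain CLI invocations; for reference, the two forms run above:

  minikube -p functional-339868 version --short
  minikube -p functional-339868 version -o=json --components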

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-339868 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-339868
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-339868
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-339868 image ls --format short --alsologtostderr:
I0229 17:46:24.243236   21422 out.go:291] Setting OutFile to fd 1 ...
I0229 17:46:24.243523   21422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:24.243536   21422 out.go:304] Setting ErrFile to fd 2...
I0229 17:46:24.243543   21422 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:24.243864   21422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:46:24.244693   21422 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:24.244857   21422 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:24.245440   21422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:24.245502   21422 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:24.262534   21422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
I0229 17:46:24.263048   21422 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:24.263738   21422 main.go:141] libmachine: Using API Version  1
I0229 17:46:24.263760   21422 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:24.264117   21422 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:24.264282   21422 main.go:141] libmachine: (functional-339868) Calling .GetState
I0229 17:46:24.265914   21422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:24.265962   21422 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:24.280797   21422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
I0229 17:46:24.281211   21422 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:24.281703   21422 main.go:141] libmachine: Using API Version  1
I0229 17:46:24.281728   21422 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:24.282038   21422 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:24.282212   21422 main.go:141] libmachine: (functional-339868) Calling .DriverName
I0229 17:46:24.282415   21422 ssh_runner.go:195] Run: systemctl --version
I0229 17:46:24.282441   21422 main.go:141] libmachine: (functional-339868) Calling .GetSSHHostname
I0229 17:46:24.285121   21422 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:24.285488   21422 main.go:141] libmachine: (functional-339868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e9:18", ip: ""} in network mk-functional-339868: {Iface:virbr1 ExpiryTime:2024-02-29 18:43:01 +0000 UTC Type:0 Mac:52:54:00:87:e9:18 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-339868 Clientid:01:52:54:00:87:e9:18}
I0229 17:46:24.285516   21422 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined IP address 192.168.39.114 and MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:24.285617   21422 main.go:141] libmachine: (functional-339868) Calling .GetSSHPort
I0229 17:46:24.285771   21422 main.go:141] libmachine: (functional-339868) Calling .GetSSHKeyPath
I0229 17:46:24.285925   21422 main.go:141] libmachine: (functional-339868) Calling .GetSSHUsername
I0229 17:46:24.286058   21422 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/functional-339868/id_rsa Username:docker}
I0229 17:46:24.436642   21422 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 17:46:24.503114   21422 main.go:141] libmachine: Making call to close driver server
I0229 17:46:24.503126   21422 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:24.503428   21422 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:24.503443   21422 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:24.503445   21422 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:24.503460   21422 main.go:141] libmachine: Making call to close driver server
I0229 17:46:24.503470   21422 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:24.503702   21422 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:24.503705   21422 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:24.503734   21422 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-339868 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/localhost/my-image                | functional-339868 | bec3b23a304df | 1.24MB |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| gcr.io/google-containers/addon-resizer      | functional-339868 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-339868 | 06e61027bed04 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-339868 image ls --format table --alsologtostderr:
I0229 17:46:28.407633   21609 out.go:291] Setting OutFile to fd 1 ...
I0229 17:46:28.407747   21609 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:28.407754   21609 out.go:304] Setting ErrFile to fd 2...
I0229 17:46:28.407758   21609 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:28.407980   21609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:46:28.408534   21609 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:28.408629   21609 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:28.409012   21609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:28.409054   21609 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:28.423751   21609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
I0229 17:46:28.424250   21609 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:28.424871   21609 main.go:141] libmachine: Using API Version  1
I0229 17:46:28.424900   21609 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:28.425330   21609 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:28.425556   21609 main.go:141] libmachine: (functional-339868) Calling .GetState
I0229 17:46:28.427506   21609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:28.427543   21609 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:28.441851   21609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
I0229 17:46:28.442290   21609 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:28.442762   21609 main.go:141] libmachine: Using API Version  1
I0229 17:46:28.442785   21609 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:28.443160   21609 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:28.443336   21609 main.go:141] libmachine: (functional-339868) Calling .DriverName
I0229 17:46:28.443530   21609 ssh_runner.go:195] Run: systemctl --version
I0229 17:46:28.443552   21609 main.go:141] libmachine: (functional-339868) Calling .GetSSHHostname
I0229 17:46:28.446428   21609 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:28.446834   21609 main.go:141] libmachine: (functional-339868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e9:18", ip: ""} in network mk-functional-339868: {Iface:virbr1 ExpiryTime:2024-02-29 18:43:01 +0000 UTC Type:0 Mac:52:54:00:87:e9:18 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-339868 Clientid:01:52:54:00:87:e9:18}
I0229 17:46:28.446852   21609 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined IP address 192.168.39.114 and MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:28.446992   21609 main.go:141] libmachine: (functional-339868) Calling .GetSSHPort
I0229 17:46:28.447175   21609 main.go:141] libmachine: (functional-339868) Calling .GetSSHKeyPath
I0229 17:46:28.447309   21609 main.go:141] libmachine: (functional-339868) Calling .GetSSHUsername
I0229 17:46:28.447450   21609 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/functional-339868/id_rsa Username:docker}
I0229 17:46:28.534816   21609 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 17:46:28.576229   21609 main.go:141] libmachine: Making call to close driver server
I0229 17:46:28.576251   21609 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:28.576517   21609 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:28.576541   21609 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:28.576552   21609 main.go:141] libmachine: Making call to close driver server
I0229 17:46:28.576561   21609 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:28.576780   21609 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:28.576874   21609 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:28.576920   21609 main.go:141] libmachine: Making call to close connection to plugin binary
E0229 17:46:31.303069   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-339868 image ls --format json --alsologtostderr:
[{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"06e61027bed04b9f77f397a42631c15819bbbf289ae9b28f4452aba7b6bc15fa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-339868"],"size":"30"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51b
aa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-339868"],"size":"32900000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTa
gs":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"bec3b23a304df69229c9799b7f057858277b983eafde3aba91889ce0f234c173","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-339868"],"size":"1240000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-339868 image ls --format json --alsologtostderr:
I0229 17:46:28.187764   21586 out.go:291] Setting OutFile to fd 1 ...
I0229 17:46:28.188009   21586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:28.188018   21586 out.go:304] Setting ErrFile to fd 2...
I0229 17:46:28.188023   21586 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:28.188247   21586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:46:28.188911   21586 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:28.189027   21586 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:28.189390   21586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:28.189431   21586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:28.203912   21586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
I0229 17:46:28.204424   21586 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:28.205000   21586 main.go:141] libmachine: Using API Version  1
I0229 17:46:28.205033   21586 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:28.205371   21586 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:28.205561   21586 main.go:141] libmachine: (functional-339868) Calling .GetState
I0229 17:46:28.207350   21586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:28.207396   21586 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:28.221933   21586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
I0229 17:46:28.222330   21586 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:28.222804   21586 main.go:141] libmachine: Using API Version  1
I0229 17:46:28.222829   21586 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:28.223230   21586 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:28.223449   21586 main.go:141] libmachine: (functional-339868) Calling .DriverName
I0229 17:46:28.223707   21586 ssh_runner.go:195] Run: systemctl --version
I0229 17:46:28.223733   21586 main.go:141] libmachine: (functional-339868) Calling .GetSSHHostname
I0229 17:46:28.226565   21586 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:28.227031   21586 main.go:141] libmachine: (functional-339868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e9:18", ip: ""} in network mk-functional-339868: {Iface:virbr1 ExpiryTime:2024-02-29 18:43:01 +0000 UTC Type:0 Mac:52:54:00:87:e9:18 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-339868 Clientid:01:52:54:00:87:e9:18}
I0229 17:46:28.227068   21586 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined IP address 192.168.39.114 and MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:28.227215   21586 main.go:141] libmachine: (functional-339868) Calling .GetSSHPort
I0229 17:46:28.227385   21586 main.go:141] libmachine: (functional-339868) Calling .GetSSHKeyPath
I0229 17:46:28.227510   21586 main.go:141] libmachine: (functional-339868) Calling .GetSSHUsername
I0229 17:46:28.227670   21586 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/functional-339868/id_rsa Username:docker}
I0229 17:46:28.314571   21586 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 17:46:28.349001   21586 main.go:141] libmachine: Making call to close driver server
I0229 17:46:28.349016   21586 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:28.349297   21586 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:28.349413   21586 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:28.349463   21586 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:28.349481   21586 main.go:141] libmachine: Making call to close driver server
I0229 17:46:28.349490   21586 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:28.349709   21586 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:28.349727   21586 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:28.349749   21586 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-339868 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 06e61027bed04b9f77f397a42631c15819bbbf289ae9b28f4452aba7b6bc15fa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-339868
size: "30"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-339868
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-339868 image ls --format yaml --alsologtostderr:
I0229 17:46:24.567320   21464 out.go:291] Setting OutFile to fd 1 ...
I0229 17:46:24.567503   21464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:24.567515   21464 out.go:304] Setting ErrFile to fd 2...
I0229 17:46:24.567520   21464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:24.567980   21464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:46:24.569597   21464 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:24.569722   21464 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:24.570177   21464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:24.570221   21464 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:24.585021   21464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
I0229 17:46:24.585596   21464 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:24.586199   21464 main.go:141] libmachine: Using API Version  1
I0229 17:46:24.586224   21464 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:24.586620   21464 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:24.586829   21464 main.go:141] libmachine: (functional-339868) Calling .GetState
I0229 17:46:24.588720   21464 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:24.588766   21464 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:24.603458   21464 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
I0229 17:46:24.603924   21464 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:24.604421   21464 main.go:141] libmachine: Using API Version  1
I0229 17:46:24.604442   21464 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:24.604903   21464 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:24.605129   21464 main.go:141] libmachine: (functional-339868) Calling .DriverName
I0229 17:46:24.605382   21464 ssh_runner.go:195] Run: systemctl --version
I0229 17:46:24.605412   21464 main.go:141] libmachine: (functional-339868) Calling .GetSSHHostname
I0229 17:46:24.608113   21464 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:24.608544   21464 main.go:141] libmachine: (functional-339868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e9:18", ip: ""} in network mk-functional-339868: {Iface:virbr1 ExpiryTime:2024-02-29 18:43:01 +0000 UTC Type:0 Mac:52:54:00:87:e9:18 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-339868 Clientid:01:52:54:00:87:e9:18}
I0229 17:46:24.608578   21464 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined IP address 192.168.39.114 and MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:24.608692   21464 main.go:141] libmachine: (functional-339868) Calling .GetSSHPort
I0229 17:46:24.608865   21464 main.go:141] libmachine: (functional-339868) Calling .GetSSHKeyPath
I0229 17:46:24.609044   21464 main.go:141] libmachine: (functional-339868) Calling .GetSSHUsername
I0229 17:46:24.609180   21464 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/functional-339868/id_rsa Username:docker}
I0229 17:46:24.733315   21464 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0229 17:46:24.840816   21464 main.go:141] libmachine: Making call to close driver server
I0229 17:46:24.840833   21464 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:24.841093   21464 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:24.841112   21464 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:24.841121   21464 main.go:141] libmachine: Making call to close driver server
I0229 17:46:24.841127   21464 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:24.841384   21464 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:24.841398   21464 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)
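
The four ImageList subtests differ only in the --format flag passed to image ls; a minimal sketch covering all four:

  # list images cached in the cluster's container runtime in each supported format
  minikube -p functional-339868 image ls --format short
  minikube -p functional-339868 image ls --format table
  minikube -p functional-339868 image ls --format json
  minikube -p functional-339868 image ls --format yaml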

TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh pgrep buildkitd: exit status 1 (209.509854ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image build -t localhost/my-image:functional-339868 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image build -t localhost/my-image:functional-339868 testdata/build --alsologtostderr: (2.838853991s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-339868 image build -t localhost/my-image:functional-339868 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7a4ef95f6ccd
Removing intermediate container 7a4ef95f6ccd
---> 801d9a067cf9
Step 3/3 : ADD content.txt /
---> bec3b23a304d
Successfully built bec3b23a304d
Successfully tagged localhost/my-image:functional-339868
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-339868 image build -t localhost/my-image:functional-339868 testdata/build --alsologtostderr:
I0229 17:46:25.116374   21520 out.go:291] Setting OutFile to fd 1 ...
I0229 17:46:25.116502   21520 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:25.116514   21520 out.go:304] Setting ErrFile to fd 2...
I0229 17:46:25.116519   21520 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:46:25.116714   21520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:46:25.117293   21520 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:25.117797   21520 config.go:182] Loaded profile config "functional-339868": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 17:46:25.118173   21520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:25.118222   21520 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:25.133292   21520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
I0229 17:46:25.133725   21520 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:25.134236   21520 main.go:141] libmachine: Using API Version  1
I0229 17:46:25.134259   21520 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:25.134631   21520 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:25.134841   21520 main.go:141] libmachine: (functional-339868) Calling .GetState
I0229 17:46:25.136773   21520 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:46:25.136824   21520 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:46:25.152261   21520 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
I0229 17:46:25.152652   21520 main.go:141] libmachine: () Calling .GetVersion
I0229 17:46:25.153154   21520 main.go:141] libmachine: Using API Version  1
I0229 17:46:25.153184   21520 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:46:25.153498   21520 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:46:25.153686   21520 main.go:141] libmachine: (functional-339868) Calling .DriverName
I0229 17:46:25.153897   21520 ssh_runner.go:195] Run: systemctl --version
I0229 17:46:25.153918   21520 main.go:141] libmachine: (functional-339868) Calling .GetSSHHostname
I0229 17:46:25.156488   21520 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:25.156882   21520 main.go:141] libmachine: (functional-339868) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:e9:18", ip: ""} in network mk-functional-339868: {Iface:virbr1 ExpiryTime:2024-02-29 18:43:01 +0000 UTC Type:0 Mac:52:54:00:87:e9:18 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:functional-339868 Clientid:01:52:54:00:87:e9:18}
I0229 17:46:25.156907   21520 main.go:141] libmachine: (functional-339868) DBG | domain functional-339868 has defined IP address 192.168.39.114 and MAC address 52:54:00:87:e9:18 in network mk-functional-339868
I0229 17:46:25.157072   21520 main.go:141] libmachine: (functional-339868) Calling .GetSSHPort
I0229 17:46:25.157252   21520 main.go:141] libmachine: (functional-339868) Calling .GetSSHKeyPath
I0229 17:46:25.157400   21520 main.go:141] libmachine: (functional-339868) Calling .GetSSHUsername
I0229 17:46:25.157551   21520 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/functional-339868/id_rsa Username:docker}
I0229 17:46:25.294438   21520 build_images.go:151] Building image from path: /tmp/build.2569889154.tar
I0229 17:46:25.294501   21520 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 17:46:25.315591   21520 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2569889154.tar
I0229 17:46:25.326402   21520 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2569889154.tar: stat -c "%s %y" /var/lib/minikube/build/build.2569889154.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2569889154.tar': No such file or directory
I0229 17:46:25.326451   21520 ssh_runner.go:362] scp /tmp/build.2569889154.tar --> /var/lib/minikube/build/build.2569889154.tar (3072 bytes)
I0229 17:46:25.409465   21520 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2569889154
I0229 17:46:25.429103   21520 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2569889154 -xf /var/lib/minikube/build/build.2569889154.tar
I0229 17:46:25.451892   21520 docker.go:360] Building image: /var/lib/minikube/build/build.2569889154
I0229 17:46:25.451961   21520 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-339868 /var/lib/minikube/build/build.2569889154
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0229 17:46:27.870847   21520 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-339868 /var/lib/minikube/build/build.2569889154: (2.418853726s)
I0229 17:46:27.870919   21520 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2569889154
I0229 17:46:27.883876   21520 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2569889154.tar
I0229 17:46:27.896062   21520 build_images.go:207] Built localhost/my-image:functional-339868 from /tmp/build.2569889154.tar
I0229 17:46:27.896096   21520 build_images.go:123] succeeded building to: functional-339868
I0229 17:46:27.896102   21520 build_images.go:124] failed building to: 
I0229 17:46:27.896133   21520 main.go:141] libmachine: Making call to close driver server
I0229 17:46:27.896146   21520 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:27.896401   21520 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:27.896421   21520 main.go:141] libmachine: Making call to close connection to plugin binary
I0229 17:46:27.896431   21520 main.go:141] libmachine: Making call to close driver server
I0229 17:46:27.896430   21520 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:27.896440   21520 main.go:141] libmachine: (functional-339868) Calling .Close
I0229 17:46:27.896664   21520 main.go:141] libmachine: (functional-339868) DBG | Closing plugin on server side
I0229 17:46:27.896673   21520 main.go:141] libmachine: Successfully made call to close driver server
I0229 17:46:27.896683   21520 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)
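
The build above uses the three-step Dockerfile from testdata/build shown in the stdout (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); a minimal sketch of reproducing it by hand, assuming a local directory ./build containing that Dockerfile and a content.txt file:

  # build inside the cluster's container runtime and confirm the tag is present
  minikube -p functional-339868 image build -t localhost/my-image:functional-339868 ./build
  minikube -p functional-339868 image ls | grep my-image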

TestFunctional/parallel/ImageCommands/Setup (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.255382366s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-339868
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.28s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr: (4.118407325s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.33s)
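
The load tests copy an image from the host's Docker daemon into the cluster; a condensed sketch of the Setup and ImageLoadDaemon steps above, assuming Docker is available on the host:

  # prepare a host-side image with the tag the test expects
  docker pull gcr.io/google-containers/addon-resizer:1.8.8
  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-339868
  # push it from the host daemon into the cluster and confirm it arrived
  minikube -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868
  minikube -p functional-339868 image ls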

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr: (2.423240026s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.70s)

TestFunctional/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdspecific-port3444708873/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.842028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdspecific-port3444708873/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh "sudo umount -f /mount-9p": exit status 1 (266.874371ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-339868 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdspecific-port3444708873/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)
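
Unlike any-port above, this variant pins the 9p server to a fixed port; a one-line sketch with a placeholder host directory $SRC:

  minikube mount -p functional-339868 $SRC:/mount-9p --port 46464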

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.185549534s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-339868
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image load --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr: (5.024745384s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.48s)

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service list -o json
functional_test.go:1490: Took "323.253995ms" to run "out/minikube-linux-amd64 -p functional-339868 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.114:32536
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T" /mount1: exit status 1 (283.82873ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-339868 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-339868 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1579038180/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.42s)
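The mount/verify/kill sequence above can be reproduced by hand; a minimal sketch, with a hypothetical host directory /srv/data standing in for the test's temp dir:

  # Mount one host directory at several guest paths (each mount process runs in the background).
  out/minikube-linux-amd64 mount -p functional-339868 /srv/data:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-amd64 mount -p functional-339868 /srv/data:/mount2 --alsologtostderr -v=1 &
  # Verify the 9p mounts are visible inside the VM.
  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount1"
  out/minikube-linux-amd64 -p functional-339868 ssh "findmnt -T /mount2"
  # Tear down every mount process for the profile in one shot.
  out/minikube-linux-amd64 mount -p functional-339868 --kill=true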

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.114:32536
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.14s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-339868 docker-env) && out/minikube-linux-amd64 status -p functional-339868"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-339868 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.14s)
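The docker-env flow exercised here points the local Docker client at the VM's daemon for the lifetime of the shell; a minimal sketch with the same profile:

  # Export DOCKER_HOST and related variables for the profile's daemon, then talk to it directly.
  eval "$(out/minikube-linux-amd64 -p functional-339868 docker-env)"
  docker images
  # Undo the environment changes when finished.
  eval "$(out/minikube-linux-amd64 -p functional-339868 docker-env --unset)"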

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
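All three update-context cases run the same command, which rewrites the profile's kubeconfig entry to match the VM's current API server address; a minimal sketch of checking the result, assuming kubectl is on PATH:

  # Re-point kubeconfig at the profile's current API server address.
  out/minikube-linux-amd64 -p functional-339868 update-context --alsologtostderr -v=2
  # Confirm which context is active and where it points.
  kubectl config current-context
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'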

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image save gcr.io/google-containers/addon-resizer:functional-339868 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image save gcr.io/google-containers/addon-resizer:functional-339868 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.384660385s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image rm gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.387871051s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-339868
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-339868 image save --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-339868 image save --daemon gcr.io/google-containers/addon-resizer:functional-339868 --alsologtostderr: (1.993917999s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-339868
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.03s)
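Taken together, the image subtests above amount to a save/remove/load round trip plus an export into the host daemon; a minimal sketch, with /tmp/addon-resizer.tar as a hypothetical scratch path:

  IMG=gcr.io/google-containers/addon-resizer:functional-339868
  # Export the image from the cluster runtime to a tarball, drop it, then re-import it.
  out/minikube-linux-amd64 -p functional-339868 image save "$IMG" /tmp/addon-resizer.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-339868 image rm "$IMG" --alsologtostderr
  out/minikube-linux-amd64 -p functional-339868 image load /tmp/addon-resizer.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-339868 image ls
  # Or push it straight into the host's Docker daemon instead of a file.
  out/minikube-linux-amd64 -p functional-339868 image save --daemon "$IMG" --alsologtostderr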

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-339868
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-339868
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-339868
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (254.26s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-859306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0229 18:26:45.899768   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-859306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (56.830610283s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-859306 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-859306 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.659923548s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-859306 addons enable gvisor
E0229 18:28:07.820025   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-859306 addons enable gvisor: (3.892257221s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [88a448e0-6aa7-4176-a404-258d9d30d870] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.005965415s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-859306 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [16255bf3-fd56-43b3-86dd-2542de95a06c] Pending
helpers_test.go:344: "nginx-gvisor" [16255bf3-fd56-43b3-86dd-2542de95a06c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [16255bf3-fd56-43b3-86dd-2542de95a06c] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 13.005087109s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-859306
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-859306: (1m32.338625168s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-859306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0229 18:30:09.379419   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-859306 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (45.074504481s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [88a448e0-6aa7-4176-a404-258d9d30d870] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.013512602s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [16255bf3-fd56-43b3-86dd-2542de95a06c] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0229 18:30:51.660716   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.007357676s
helpers_test.go:175: Cleaning up "gvisor-859306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-859306
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-859306: (1.214530579s)
--- PASS: TestGvisorAddon (254.26s)
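The gVisor flow above also survives a stop/start cycle of the VM; a minimal sketch of the enable-and-verify portion, using the same profile and label selectors as the test (the nginx manifest is the test's own testdata and is not reproduced here):

  out/minikube-linux-amd64 start -p gvisor-859306 --container-runtime=containerd --driver=kvm2
  out/minikube-linux-amd64 -p gvisor-859306 cache add gcr.io/k8s-minikube/gvisor-addon:2
  out/minikube-linux-amd64 -p gvisor-859306 addons enable gvisor
  # Wait for the addon pod, then inspect workloads labelled as gVisor-backed.
  kubectl --context gvisor-859306 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gvisor --timeout=4m
  kubectl --context gvisor-859306 get pods -l run=nginx,runtime=gvisor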

                                                
                                    
TestImageBuild/serial/Setup (47.99s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-179304 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-179304 --driver=kvm2 : (47.98669631s)
--- PASS: TestImageBuild/serial/Setup (47.99s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.59s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-179304
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-179304: (1.587682271s)
--- PASS: TestImageBuild/serial/NormalBuild (1.59s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-179304
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-179304: (1.048933256s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.05s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-179304
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-179304
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
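The image-build subtests above pass builder options through --build-opt and can target a non-default Dockerfile with -f; a minimal sketch against the same testdata layout (substitute your own build context):

  # Build inside the cluster's runtime, passing a build-arg and bypassing the cache.
  out/minikube-linux-amd64 -p image-179304 image build -t aaa:latest \
    --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
  # Select a Dockerfile in a subdirectory of the context explicitly.
  out/minikube-linux-amd64 -p image-179304 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f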

                                                
                                    
TestJSONOutput/start/Command (64.58s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-785043 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-785043 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m4.579137336s)
--- PASS: TestJSONOutput/start/Command (64.58s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-785043 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-785043 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-785043 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-785043 --output=json --user=testUser: (13.10901333s)
--- PASS: TestJSONOutput/stop/Command (13.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-505490 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-505490 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.453489ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a7cc83c0-c549-4970-b73e-5dd4ec83c20c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-505490] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"142bbab8-cff2-4f23-b99a-c5a1d51472a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"df0b16d7-15bf-4a6d-b416-82e4ae0c973a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e54110be-4c28-4dc1-ade0-e9a637caacbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig"}}
	{"specversion":"1.0","id":"bf27f91e-c942-4365-80fa-c9415c42d820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube"}}
	{"specversion":"1.0","id":"d8b407f8-7a96-4953-8347-24f3887b4768","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6ed06767-0ddc-48d8-b400-4de09ddee062","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b509028-d485-4c7a-921c-e38545348cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-505490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-505490
--- PASS: TestErrorJSONOutput (0.22s)
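Both the JSON-mode start earlier and this failure case emit one CloudEvents-style JSON object per line, with the fields shown in the stdout block above; a minimal sketch of filtering that stream, assuming `jq` is installed and reusing the earlier profile name:

  # Print step progress from a JSON-mode start; error events carry type io.k8s.sigs.minikube.error.
  out/minikube-linux-amd64 start -p json-output-785043 --output=json --driver=kvm2 \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'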

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (104.59s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-964514 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-964514 --driver=kvm2 : (50.531354457s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-967121 --driver=kvm2 
E0229 18:00:09.379716   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-967121 --driver=kvm2 : (51.465259687s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-964514
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-967121
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-967121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-967121
helpers_test.go:175: Cleaning up "first-964514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-964514
--- PASS: TestMinikubeProfile (104.59s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.02s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-199000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0229 18:01:00.469361   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-199000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.019881609s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.02s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-199000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-199000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
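The two subtests above start a kubeless VM with a 9p host mount and then check it from inside; a minimal sketch combining them, with the mount flags copied from the test run:

  # Start a no-Kubernetes VM with the default host directory mounted at /minikube-host over 9p.
  out/minikube-linux-amd64 start -p mount-start-1-199000 --memory=2048 --mount \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 --no-kubernetes --driver=kvm2
  # Confirm the mount is visible from inside the VM.
  out/minikube-linux-amd64 -p mount-start-1-199000 ssh -- ls /minikube-host
  out/minikube-linux-amd64 -p mount-start-1-199000 ssh -- mount | grep 9p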

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.72s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-218146 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0229 18:01:32.427029   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-218146 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.72370594s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.72s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.91s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-199000 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (2.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-218146
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-218146: (2.093818853s)
--- PASS: TestMountStart/serial/Stop (2.09s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.08s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-218146
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-218146: (23.078761436s)
--- PASS: TestMountStart/serial/RestartStopped (24.08s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-218146 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (122.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589829 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589829 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m2.107269315s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (122.53s)
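A two-node cluster of this shape comes up from a single start invocation; a minimal sketch, with memory and node count mirroring the test's flags:

  # Create a control plane plus one worker, then confirm both report Running.
  out/minikube-linux-amd64 start -p multinode-589829 --nodes=2 --memory=2200 --wait=true --driver=kvm2
  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
  # Additional workers can be attached later, as the AddNode subtest below does.
  out/minikube-linux-amd64 node add -p multinode-589829 -v 3 --alsologtostderr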

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-589829 -- rollout status deployment/busybox: (2.749130834s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kkrdn -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kpq8p -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kkrdn -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kpq8p -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kkrdn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kpq8p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.60s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kkrdn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kkrdn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kpq8p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-589829 -- exec busybox-5b5d89c9d6-kpq8p -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
TestMultiNode/serial/AddNode (45.63s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-589829 -v 3 --alsologtostderr
E0229 18:05:09.379619   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-589829 -v 3 --alsologtostderr: (45.062047804s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.63s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-589829 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp testdata/cp-test.txt multinode-589829:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996119793/001/cp-test_multinode-589829.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829:/home/docker/cp-test.txt multinode-589829-m02:/home/docker/cp-test_multinode-589829_multinode-589829-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test_multinode-589829_multinode-589829-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829:/home/docker/cp-test.txt multinode-589829-m03:/home/docker/cp-test_multinode-589829_multinode-589829-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test_multinode-589829_multinode-589829-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp testdata/cp-test.txt multinode-589829-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996119793/001/cp-test_multinode-589829-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m02:/home/docker/cp-test.txt multinode-589829:/home/docker/cp-test_multinode-589829-m02_multinode-589829.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test_multinode-589829-m02_multinode-589829.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m02:/home/docker/cp-test.txt multinode-589829-m03:/home/docker/cp-test_multinode-589829-m02_multinode-589829-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test_multinode-589829-m02_multinode-589829-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp testdata/cp-test.txt multinode-589829-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996119793/001/cp-test_multinode-589829-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m03:/home/docker/cp-test.txt multinode-589829:/home/docker/cp-test_multinode-589829-m03_multinode-589829.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829 "sudo cat /home/docker/cp-test_multinode-589829-m03_multinode-589829.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829-m03:/home/docker/cp-test.txt multinode-589829-m02:/home/docker/cp-test_multinode-589829-m03_multinode-589829-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test_multinode-589829-m03_multinode-589829-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.48s)
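The CopyFile matrix above boils down to three directions of `minikube cp`, each verified over ssh; a minimal sketch, where /tmp/cp-test-copy.txt is a hypothetical destination on the host:

  # Host -> node, node -> host, and node -> node copies within the multi-node profile.
  out/minikube-linux-amd64 -p multinode-589829 cp testdata/cp-test.txt multinode-589829:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
  out/minikube-linux-amd64 -p multinode-589829 cp multinode-589829:/home/docker/cp-test.txt multinode-589829-m02:/home/docker/cp-test.txt
  # Verify the copy landed on the second node.
  out/minikube-linux-amd64 -p multinode-589829 ssh -n multinode-589829-m02 "sudo cat /home/docker/cp-test.txt"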

                                                
                                    
TestMultiNode/serial/StopNode (3.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-589829 node stop m03: (2.355700537s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589829 status: exit status 7 (428.320716ms)

                                                
                                                
-- stdout --
	multinode-589829
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589829-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589829-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr: exit status 7 (434.359496ms)

                                                
                                                
-- stdout --
	multinode-589829
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-589829-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-589829-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:05:28.053257   29465 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:05:28.053537   29465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:05:28.053548   29465 out.go:304] Setting ErrFile to fd 2...
	I0229 18:05:28.053552   29465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:05:28.053760   29465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:05:28.053918   29465 out.go:298] Setting JSON to false
	I0229 18:05:28.053940   29465 mustload.go:65] Loading cluster: multinode-589829
	I0229 18:05:28.053977   29465 notify.go:220] Checking for updates...
	I0229 18:05:28.054280   29465 config.go:182] Loaded profile config "multinode-589829": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:05:28.054301   29465 status.go:255] checking status of multinode-589829 ...
	I0229 18:05:28.054679   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.054741   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.071963   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36321
	I0229 18:05:28.072481   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.073186   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.073215   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.073556   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.073739   29465 main.go:141] libmachine: (multinode-589829) Calling .GetState
	I0229 18:05:28.075459   29465 status.go:330] multinode-589829 host status = "Running" (err=<nil>)
	I0229 18:05:28.075485   29465 host.go:66] Checking if "multinode-589829" exists ...
	I0229 18:05:28.075798   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.075831   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.090441   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42085
	I0229 18:05:28.090824   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.091297   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.091317   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.091576   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.091760   29465 main.go:141] libmachine: (multinode-589829) Calling .GetIP
	I0229 18:05:28.094487   29465 main.go:141] libmachine: (multinode-589829) DBG | domain multinode-589829 has defined MAC address 52:54:00:c8:ab:ea in network mk-multinode-589829
	I0229 18:05:28.094909   29465 main.go:141] libmachine: (multinode-589829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:ab:ea", ip: ""} in network mk-multinode-589829: {Iface:virbr1 ExpiryTime:2024-02-29 19:02:38 +0000 UTC Type:0 Mac:52:54:00:c8:ab:ea Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-589829 Clientid:01:52:54:00:c8:ab:ea}
	I0229 18:05:28.094939   29465 main.go:141] libmachine: (multinode-589829) DBG | domain multinode-589829 has defined IP address 192.168.39.173 and MAC address 52:54:00:c8:ab:ea in network mk-multinode-589829
	I0229 18:05:28.095048   29465 host.go:66] Checking if "multinode-589829" exists ...
	I0229 18:05:28.095371   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.095422   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.109995   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I0229 18:05:28.110419   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.110828   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.110848   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.111199   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.111397   29465 main.go:141] libmachine: (multinode-589829) Calling .DriverName
	I0229 18:05:28.111558   29465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:05:28.111579   29465 main.go:141] libmachine: (multinode-589829) Calling .GetSSHHostname
	I0229 18:05:28.114284   29465 main.go:141] libmachine: (multinode-589829) DBG | domain multinode-589829 has defined MAC address 52:54:00:c8:ab:ea in network mk-multinode-589829
	I0229 18:05:28.114715   29465 main.go:141] libmachine: (multinode-589829) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:ab:ea", ip: ""} in network mk-multinode-589829: {Iface:virbr1 ExpiryTime:2024-02-29 19:02:38 +0000 UTC Type:0 Mac:52:54:00:c8:ab:ea Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:multinode-589829 Clientid:01:52:54:00:c8:ab:ea}
	I0229 18:05:28.114740   29465 main.go:141] libmachine: (multinode-589829) DBG | domain multinode-589829 has defined IP address 192.168.39.173 and MAC address 52:54:00:c8:ab:ea in network mk-multinode-589829
	I0229 18:05:28.114902   29465 main.go:141] libmachine: (multinode-589829) Calling .GetSSHPort
	I0229 18:05:28.115050   29465 main.go:141] libmachine: (multinode-589829) Calling .GetSSHKeyPath
	I0229 18:05:28.115221   29465 main.go:141] libmachine: (multinode-589829) Calling .GetSSHUsername
	I0229 18:05:28.115351   29465 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/multinode-589829/id_rsa Username:docker}
	I0229 18:05:28.191723   29465 ssh_runner.go:195] Run: systemctl --version
	I0229 18:05:28.198426   29465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:05:28.216160   29465 kubeconfig.go:92] found "multinode-589829" server: "https://192.168.39.173:8443"
	I0229 18:05:28.216183   29465 api_server.go:166] Checking apiserver status ...
	I0229 18:05:28.216212   29465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:05:28.232428   29465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1933/cgroup
	W0229 18:05:28.244554   29465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1933/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:05:28.244594   29465 ssh_runner.go:195] Run: ls
	I0229 18:05:28.249564   29465 api_server.go:253] Checking apiserver healthz at https://192.168.39.173:8443/healthz ...
	I0229 18:05:28.256365   29465 api_server.go:279] https://192.168.39.173:8443/healthz returned 200:
	ok
	I0229 18:05:28.256399   29465 status.go:421] multinode-589829 apiserver status = Running (err=<nil>)
	I0229 18:05:28.256409   29465 status.go:257] multinode-589829 status: &{Name:multinode-589829 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:05:28.256428   29465 status.go:255] checking status of multinode-589829-m02 ...
	I0229 18:05:28.256735   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.256773   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.271541   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0229 18:05:28.271987   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.272461   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.272486   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.272775   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.272959   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetState
	I0229 18:05:28.274466   29465 status.go:330] multinode-589829-m02 host status = "Running" (err=<nil>)
	I0229 18:05:28.274479   29465 host.go:66] Checking if "multinode-589829-m02" exists ...
	I0229 18:05:28.274735   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.274767   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.289228   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42695
	I0229 18:05:28.289579   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.290037   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.290076   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.290363   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.290549   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetIP
	I0229 18:05:28.293073   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | domain multinode-589829-m02 has defined MAC address 52:54:00:c6:15:e3 in network mk-multinode-589829
	I0229 18:05:28.293439   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:15:e3", ip: ""} in network mk-multinode-589829: {Iface:virbr1 ExpiryTime:2024-02-29 19:03:50 +0000 UTC Type:0 Mac:52:54:00:c6:15:e3 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-589829-m02 Clientid:01:52:54:00:c6:15:e3}
	I0229 18:05:28.293465   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | domain multinode-589829-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:c6:15:e3 in network mk-multinode-589829
	I0229 18:05:28.293600   29465 host.go:66] Checking if "multinode-589829-m02" exists ...
	I0229 18:05:28.293872   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.293909   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.308161   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38203
	I0229 18:05:28.308524   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.308938   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.308962   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.309270   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.309436   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .DriverName
	I0229 18:05:28.309609   29465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 18:05:28.309627   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetSSHHostname
	I0229 18:05:28.312030   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | domain multinode-589829-m02 has defined MAC address 52:54:00:c6:15:e3 in network mk-multinode-589829
	I0229 18:05:28.312394   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:15:e3", ip: ""} in network mk-multinode-589829: {Iface:virbr1 ExpiryTime:2024-02-29 19:03:50 +0000 UTC Type:0 Mac:52:54:00:c6:15:e3 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-589829-m02 Clientid:01:52:54:00:c6:15:e3}
	I0229 18:05:28.312419   29465 main.go:141] libmachine: (multinode-589829-m02) DBG | domain multinode-589829-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:c6:15:e3 in network mk-multinode-589829
	I0229 18:05:28.312532   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetSSHPort
	I0229 18:05:28.312683   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetSSHKeyPath
	I0229 18:05:28.312814   29465 main.go:141] libmachine: (multinode-589829-m02) Calling .GetSSHUsername
	I0229 18:05:28.312931   29465 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/multinode-589829-m02/id_rsa Username:docker}
	I0229 18:05:28.399151   29465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:05:28.415043   29465 status.go:257] multinode-589829-m02 status: &{Name:multinode-589829-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:05:28.415092   29465 status.go:255] checking status of multinode-589829-m03 ...
	I0229 18:05:28.415411   29465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:05:28.415447   29465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:05:28.431229   29465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0229 18:05:28.431664   29465 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:05:28.432069   29465 main.go:141] libmachine: Using API Version  1
	I0229 18:05:28.432088   29465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:05:28.432424   29465 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:05:28.432614   29465 main.go:141] libmachine: (multinode-589829-m03) Calling .GetState
	I0229 18:05:28.434152   29465 status.go:330] multinode-589829-m03 host status = "Stopped" (err=<nil>)
	I0229 18:05:28.434167   29465 status.go:343] host is not running, skipping remaining checks
	I0229 18:05:28.434174   29465 status.go:257] multinode-589829-m03 status: &{Name:multinode-589829-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (24.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-589829 node start m03 --alsologtostderr: (24.317926506s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.94s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (159.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589829
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-589829
E0229 18:06:00.470215   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-589829: (27.762241109s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589829 --wait=true -v=8 --alsologtostderr
E0229 18:07:23.519495   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589829 --wait=true -v=8 --alsologtostderr: (2m11.72739696s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589829
--- PASS: TestMultiNode/serial/RestartKeepsNodes (159.60s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-589829 node delete m03: (1.167873412s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.72s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 stop
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-589829 stop: (25.369649974s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589829 status: exit status 7 (93.259979ms)

                                                
                                                
-- stdout --
	multinode-589829
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589829-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr: exit status 7 (91.376981ms)

                                                
                                                
-- stdout --
	multinode-589829
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-589829-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0229 18:09:00.217312   30821 out.go:291] Setting OutFile to fd 1 ...
	I0229 18:09:00.217419   30821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:09:00.217441   30821 out.go:304] Setting ErrFile to fd 2...
	I0229 18:09:00.217448   30821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:09:00.217668   30821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
	I0229 18:09:00.217836   30821 out.go:298] Setting JSON to false
	I0229 18:09:00.217860   30821 mustload.go:65] Loading cluster: multinode-589829
	I0229 18:09:00.217979   30821 notify.go:220] Checking for updates...
	I0229 18:09:00.218357   30821 config.go:182] Loaded profile config "multinode-589829": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:09:00.218380   30821 status.go:255] checking status of multinode-589829 ...
	I0229 18:09:00.218956   30821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:09:00.219000   30821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:00.234199   30821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45597
	I0229 18:09:00.234694   30821 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:00.235217   30821 main.go:141] libmachine: Using API Version  1
	I0229 18:09:00.235245   30821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:00.235781   30821 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:00.235977   30821 main.go:141] libmachine: (multinode-589829) Calling .GetState
	I0229 18:09:00.237671   30821 status.go:330] multinode-589829 host status = "Stopped" (err=<nil>)
	I0229 18:09:00.237684   30821 status.go:343] host is not running, skipping remaining checks
	I0229 18:09:00.237689   30821 status.go:257] multinode-589829 status: &{Name:multinode-589829 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 18:09:00.237710   30821 status.go:255] checking status of multinode-589829-m02 ...
	I0229 18:09:00.238004   30821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0229 18:09:00.238055   30821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0229 18:09:00.252014   30821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33853
	I0229 18:09:00.252398   30821 main.go:141] libmachine: () Calling .GetVersion
	I0229 18:09:00.252844   30821 main.go:141] libmachine: Using API Version  1
	I0229 18:09:00.252873   30821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0229 18:09:00.253161   30821 main.go:141] libmachine: () Calling .GetMachineName
	I0229 18:09:00.253321   30821 main.go:141] libmachine: (multinode-589829-m02) Calling .GetState
	I0229 18:09:00.254702   30821 status.go:330] multinode-589829-m02 host status = "Stopped" (err=<nil>)
	I0229 18:09:00.254716   30821 status.go:343] host is not running, skipping remaining checks
	I0229 18:09:00.254724   30821 status.go:257] multinode-589829-m02 status: &{Name:multinode-589829-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.55s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (169.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589829 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0229 18:10:09.380065   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 18:11:00.469842   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589829 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m48.825558219s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-589829 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (169.36s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-589829
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589829-m02 --driver=kvm2 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-589829-m02 --driver=kvm2 : exit status 14 (82.788256ms)

                                                
                                                
-- stdout --
	* [multinode-589829-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-589829-m02' is duplicated with machine name 'multinode-589829-m02' in profile 'multinode-589829'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-589829-m03 --driver=kvm2 
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-589829-m03 --driver=kvm2 : (47.578232546s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-589829
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-589829: exit status 80 (228.224906ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-589829
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-589829-m03 already exists in multinode-589829-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-589829-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.93s)

                                                
                                    
TestPreload (206.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-065742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-065742 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m10.144359527s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-065742 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-065742 image pull gcr.io/k8s-minikube/busybox: (1.222659781s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-065742
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-065742: (13.112547268s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-065742 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0229 18:15:09.379763   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 18:16:00.470422   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-065742 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m0.907662327s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-065742 image list
helpers_test.go:175: Cleaning up "test-preload-065742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-065742
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-065742: (1.029761407s)
--- PASS: TestPreload (206.63s)

                                                
                                    
TestScheduledStopUnix (122.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-099533 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-099533 --memory=2048 --driver=kvm2 : (50.973757023s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-099533 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-099533 -n scheduled-stop-099533
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-099533 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-099533 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-099533 -n scheduled-stop-099533
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-099533
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-099533 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-099533
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-099533: exit status 7 (75.433359ms)

                                                
                                                
-- stdout --
	scheduled-stop-099533
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-099533 -n scheduled-stop-099533
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-099533 -n scheduled-stop-099533: exit status 7 (75.362098ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-099533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-099533
--- PASS: TestScheduledStopUnix (122.68s)

                                                
                                    
TestSkaffold (146.77s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2094868687 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-267594 --memory=2600 --driver=kvm2 
E0229 18:18:12.427789   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-267594 --memory=2600 --driver=kvm2 : (48.236498961s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2094868687 run --minikube-profile skaffold-267594 --kube-context skaffold-267594 --status-check=true --port-forward=false --interactive=false
E0229 18:20:09.379777   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2094868687 run --minikube-profile skaffold-267594 --kube-context skaffold-267594 --status-check=true --port-forward=false --interactive=false: (1m25.587361964s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7f9d5d788-hlxgr" [1d7a3a8c-b9d1-4e94-80e9-9e5b649e4b81] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004151175s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-586f46bfd8-rbn4t" [0673ef97-4f82-48ca-9dc6-117b9e23a3a0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003986522s
helpers_test.go:175: Cleaning up "skaffold-267594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-267594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-267594: (1.199073782s)
--- PASS: TestSkaffold (146.77s)

                                                
                                    
TestRunningBinaryUpgrade (226.83s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3687652208 start -p running-upgrade-799804 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3687652208 start -p running-upgrade-799804 --memory=2200 --vm-driver=kvm2 : (2m17.381825445s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-799804 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-799804 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m27.528291652s)
helpers_test.go:175: Cleaning up "running-upgrade-799804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-799804
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-799804: (1.185443545s)
--- PASS: TestRunningBinaryUpgrade (226.83s)

                                                
                                    
TestPause/serial/Start (117.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-398168 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E0229 18:21:00.470088   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-398168 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m57.444066603s)
--- PASS: TestPause/serial/Start (117.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (76.717169ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-960195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (88.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-960195 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-960195 --driver=kvm2 : (1m28.075205508s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-960195 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (88.35s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (79.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-398168 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-398168 --alsologtostderr -v=1 --driver=kvm2 : (1m19.546081498s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (79.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --driver=kvm2 : (10.332199167s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-960195 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-960195 status -o json: exit status 2 (282.691732ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-960195","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-960195
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.44s)

                                                
                                    
TestNoKubernetes/serial/Start (30.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-960195 --no-kubernetes --driver=kvm2 : (30.436350289s)
--- PASS: TestNoKubernetes/serial/Start (30.44s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-398168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-398168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-398168 --output=json --layout=cluster: exit status 2 (244.503742ms)

                                                
                                                
-- stdout --
	{"Name":"pause-398168","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-398168","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

                                                
                                    
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-398168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-398168 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-960195 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-960195 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.716088ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (21.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (21.014293703s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (21.65s)

                                                
                                    
TestPause/serial/DeletePaused (1.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-398168 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-398168 --alsologtostderr -v=5: (1.030725907s)
--- PASS: TestPause/serial/DeletePaused (1.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-960195
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-960195: (2.392656794s)
--- PASS: TestNoKubernetes/serial/Stop (2.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (54.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-960195 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-960195 --driver=kvm2 : (54.251669422s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (54.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-960195 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-960195 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.58513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (153.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4192298106 start -p stopped-upgrade-338754 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4192298106 start -p stopped-upgrade-338754 --memory=2200 --vm-driver=kvm2 : (57.06354547s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4192298106 -p stopped-upgrade-338754 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4192298106 -p stopped-upgrade-338754 stop: (13.178292589s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-338754 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-338754 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m22.962378376s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.20s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-338754
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-338754: (1.74374482s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (103.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m43.323998593s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m26.88614274s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.89s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (139.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m19.515177244s)
--- PASS: TestNetworkPlugins/group/calico/Start (139.52s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lfghq" [7ed28a3a-676b-47c6-8da9-558ba7ee0ffe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lfghq" [7ed28a3a-676b-47c6-8da9-558ba7ee0ffe] Running
E0229 18:30:23.977621   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004351182s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nzbs7" [33f4032e-0a0f-4900-aacc-41588100e93e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007000563s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pb8tm" [4348bf28-59e4-4392-89fd-97d675d6dbca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pb8tm" [4348bf28-59e4-4392-89fd-97d675d6dbca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006201322s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (79.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m19.081698075s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.08s)

                                                
                                    
TestNetworkPlugins/group/false/Start (122.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m2.991618962s)
--- PASS: TestNetworkPlugins/group/false/Start (122.99s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (144.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0229 18:31:00.470239   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (2m24.612468885s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (144.61s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8mbcn" [f6cdf6e2-57d3-4ee7-ae61-5f3710ea769d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006873097s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c84jn" [231ecaf7-72a6-45f0-aef9-34471157d079] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c84jn" [231ecaf7-72a6-45f0-aef9-34471157d079] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004437058s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (95.43s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m35.434450883s)
--- PASS: TestNetworkPlugins/group/flannel/Start (95.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-twwgf" [3adb658f-21d7-4726-b56b-94c7855998ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-twwgf" [3adb658f-21d7-4726-b56b-94c7855998ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004774745s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (110.91s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m50.906645114s)
--- PASS: TestNetworkPlugins/group/bridge/Start (110.91s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b76b9" [b4613e69-6ba7-40b7-8547-f70337a191fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-b76b9" [b4613e69-6ba7-40b7-8547-f70337a191fe] Running
E0229 18:33:08.804751   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:08.810058   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:08.820340   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:08.840681   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:08.881017   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:08.961359   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:09.122259   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:09.442633   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:10.083186   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:33:11.364113   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.00821955s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wzcc7" [17b16a6d-5dd6-43c8-aed8-ec878793d9b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 18:33:29.285525   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wzcc7" [17b16a6d-5dd6-43c8-aed8-ec878793d9b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005171941s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (107.84s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-911469 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m47.839406356s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (107.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6gvzw" [a8fc8283-28af-4e3d-b793-55a2bd553323] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004759691s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zjcf7" [70f97ba0-56f3-4de1-8a1c-838b9b4024f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zjcf7" [70f97ba0-56f3-4de1-8a1c-838b9b4024f6] Running
E0229 18:33:49.766701   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004359666s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (146.4s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-580872 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-580872 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (2m26.401382485s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (146.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xm24w" [c07402e3-eaea-4caf-90cf-46fdef047afd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 18:34:30.726926   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xm24w" [c07402e3-eaea-4caf-90cf-46fdef047afd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004690511s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (107.25s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-154269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E0229 18:35:09.379324   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 18:35:17.206523   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.211801   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.222065   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.242379   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.282663   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.363356   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.523788   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:17.843899   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:18.484660   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-154269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (1m47.252250034s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-911469 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-911469 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-n5fl6" [5a4a4813-6b33-4e22-9aa8-6598ad80169b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 18:35:19.764859   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:22.325617   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:23.103191   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.108458   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.118772   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.139119   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.179503   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.259915   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.420313   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.741166   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:23.977447   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/skaffold-267594/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-n5fl6" [5a4a4813-6b33-4e22-9aa8-6598ad80169b] Running
E0229 18:35:24.381499   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:25.662662   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:35:27.445776   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:35:28.223210   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.006456649s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-911469 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-911469 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.19s)
E0229 18:43:01.104311   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:43:02.831321   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kubenet-911469/client.crt: no such file or directory
E0229 18:43:08.805494   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:43:24.374176   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:43:28.788709   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:43:32.449304   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-270866 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E0229 18:35:52.647894   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:35:58.166977   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:36:00.469470   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 18:36:04.066549   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:36:18.619981   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.625319   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.635674   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.656036   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.696329   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.777057   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:18.937663   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:19.258281   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:19.899020   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:21.179477   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:23.739633   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:28.860202   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-270866 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m11.112997959s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-580872 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72ff9b04-cd51-4780-b23a-e25b570240d6] Pending
helpers_test.go:344: "busybox" [72ff9b04-cd51-4780-b23a-e25b570240d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0229 18:36:39.100396   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:36:39.127682   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
helpers_test.go:344: "busybox" [72ff9b04-cd51-4780-b23a-e25b570240d6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.007805595s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-580872 exec busybox -- /bin/sh -c "ulimit -n"
E0229 18:36:45.027620   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-580872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-580872 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-154269 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e2af253-29bd-4cfd-874d-a06f44f844ee] Pending
helpers_test.go:344: "busybox" [9e2af253-29bd-4cfd-874d-a06f44f844ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e2af253-29bd-4cfd-874d-a06f44f844ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005628023s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-154269 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.14s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-580872 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-580872 --alsologtostderr -v=3: (13.135789482s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-154269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-154269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.230245876s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-154269 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.14s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-154269 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-154269 --alsologtostderr -v=3: (13.138214204s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-270866 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22763119-70d1-4c45-852f-2d74027df567] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [22763119-70d1-4c45-852f-2d74027df567] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004932564s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-270866 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580872 -n no-preload-580872
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580872 -n no-preload-580872: exit status 7 (86.491554ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-580872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
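The "(dbg) Run:" and "Non-zero exit" lines in this report come from the test harness shelling out to the minikube binary and inspecting the exit code; in the block above, `minikube status` returned "Stopped" with exit status 7, which the test treats as acceptable after a stop. A rough Go sketch of that pattern follows; it is not minikube's actual test helper, and only the binary path and flags are taken from the log entry above.

// Illustrative sketch only; not minikube's helper code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-580872", "-n", "no-preload-580872")
	out, err := cmd.CombinedOutput()
	fmt.Printf("stdout/stderr:\n%s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A stopped host prints "Stopped" and exits non-zero
		// (exit status 7 in the run above), which the test tolerates.
		fmt.Printf("non-zero exit: %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run command:", err)
	}
}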

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (319.16s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-580872 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0229 18:36:59.581280   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-580872 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (5m18.862808715s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-580872 -n no-preload-580872
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (319.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-270866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0229 18:37:06.913675   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:06.919030   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:06.929428   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:06.949816   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:06.990197   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:07.070615   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-270866 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087819305s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-270866 describe deploy/metrics-server -n kube-system
E0229 18:37:07.231667   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-270866 --alsologtostderr -v=3
E0229 18:37:07.552430   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:08.192708   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-270866 --alsologtostderr -v=3: (13.157530527s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-154269 -n embed-certs-154269
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-154269 -n embed-certs-154269: exit status 7 (86.280362ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-154269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (317.1s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-154269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
E0229 18:37:09.473739   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:12.033917   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:17.154235   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-154269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (5m16.812747885s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-154269 -n embed-certs-154269
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (317.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866: exit status 7 (87.085482ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-270866 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.88s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-270866 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E0229 18:37:27.395002   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:37:40.542624   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/calico-911469/client.crt: no such file or directory
E0229 18:37:47.875814   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:38:01.048881   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/auto-911469/client.crt: no such file or directory
E0229 18:38:01.105169   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.110533   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.120875   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.141187   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.181533   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.261914   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.422224   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:01.742391   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:02.383103   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:03.663870   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:06.224920   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:06.948134   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/kindnet-911469/client.crt: no such file or directory
E0229 18:38:08.804979   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
E0229 18:38:11.345191   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:21.585536   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/false-911469/client.crt: no such file or directory
E0229 18:38:24.374642   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.379889   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.390189   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.410531   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.450780   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.531132   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:24.691580   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:25.012234   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:25.652986   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:26.933493   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:28.836781   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
E0229 18:38:29.493736   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:32.449286   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.454598   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.464868   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.485173   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.525455   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.605807   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:32.766196   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:33.086868   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:33.727014   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:34.614451   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
E0229 18:38:35.007951   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
E0229 18:38:36.488713   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/gvisor-859306/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-270866 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (10m19.613875742s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (619.88s)
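
For reference, the second-start flow above can be replayed by hand against the same profile. A minimal sketch, assuming the freshly built binary at out/minikube-linux-amd64 and the profile created by this run:

    # restart the stopped profile with the same flags the test passes
    out/minikube-linux-amd64 start -p default-k8s-diff-port-270866 --memory=2200 \
      --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 \
      --kubernetes-version=v1.28.4
    # verify the host component is back up (the test only checks the Host field)
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866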

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-467811 --alsologtostderr -v=3
E0229 18:40:09.379440   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 18:40:09.673666   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-467811 --alsologtostderr -v=3: (2.144754859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811: exit status 7 (80.093313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467811 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
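
The non-zero exit here is expected: with the host stopped, the status command exits 7 and prints Stopped, and the test explicitly treats that as acceptable before re-enabling the addon. A minimal sketch of the same check, using the profile from this run:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-467811 -n old-k8s-version-467811
    echo $?   # 7 means the host is stopped, which the test accepts ("may be ok")
    # addon configuration can still be changed while the cluster is down
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-467811 --images=MetricsScraper=registry.k8s.io/echoserver:1.4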

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-swqj7" [9bf5d02e-f190-48ba-9311-06c7cc98ffde] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-swqj7" [9bf5d02e-f190-48ba-9311-06c7cc98ffde] Running
E0229 18:42:34.598470   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/custom-flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.005295613s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.01s)
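
The test polls for up to 9 minutes for the dashboard pod to become Ready. Roughly the same wait can be expressed with a single kubectl command; a sketch, assuming the kube context created by this run:

    kubectl --context no-preload-580872 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m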

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vpgsf" [82cb26a0-11c9-464e-9439-bc5575cf0ca3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005129205s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vpgsf" [82cb26a0-11c9-464e-9439-bc5575cf0ca3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005312639s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-154269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
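
The addon check above resolves to a kubectl describe of the metrics scraper; a minimal sketch of the same inspection, using the context from this run:

    # confirm the dashboard pods the test waits on are Running
    kubectl --context embed-certs-154269 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # then dump the scraper deployment the test describes
    kubectl --context embed-certs-154269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard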

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-154269 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
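
The image check simply lists what is loaded in the node and flags anything outside the stock minikube set (here the busybox and gvisor-addon test images). A sketch of the same listing, piped through jq purely for readability (jq is an assumption, not part of the test):

    out/minikube-linux-amd64 -p embed-certs-154269 image list --format=json | jq .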

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-154269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-154269 -n embed-certs-154269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-154269 -n embed-certs-154269: exit status 2 (280.244904ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-154269 -n embed-certs-154269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-154269 -n embed-certs-154269: exit status 2 (289.548896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-154269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-154269 -n embed-certs-154269
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-154269 -n embed-certs-154269
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.89s)
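
The pause cycle above can be reproduced directly with the commands from the log; a minimal sketch against the profile from this run:

    out/minikube-linux-amd64 pause -p embed-certs-154269 --alsologtostderr -v=1
    # while paused, status exits 2: the apiserver reports Paused and the kubelet Stopped
    out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-154269 -n embed-certs-154269
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-154269 -n embed-certs-154269
    out/minikube-linux-amd64 unpause -p embed-certs-154269 --alsologtostderr -v=1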

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-swqj7" [9bf5d02e-f190-48ba-9311-06c7cc98ffde] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006470131s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-580872 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (69.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-555986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-555986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (1m9.985068574s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.99s)
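
This first start deliberately waits only for the apiserver, system pods, and the default service account, since no CNI has been deployed yet and user pods cannot schedule (see the WARNING lines further down). A minimal sketch of the same invocation, using the flags from the log:

    out/minikube-linux-amd64 start -p newest-cni-555986 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --kubernetes-version=v1.29.0-rc.2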

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-580872 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-580872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580872 -n no-preload-580872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580872 -n no-preload-580872: exit status 2 (283.791343ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580872 -n no-preload-580872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580872 -n no-preload-580872: exit status 2 (270.298234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-580872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-580872 -n no-preload-580872
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-580872 -n no-preload-580872
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-555986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0229 18:43:52.057637   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/enable-default-cni-911469/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.93s)
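
The addon is enabled with its image and registry overridden, so the run exercises the addon plumbing without pulling a real metrics-server image (fake.domain is never reachable). A sketch of the same override, using the flags from the log:

    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-555986 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain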

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-555986 --alsologtostderr -v=3
E0229 18:44:00.131791   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/flannel-911469/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-555986 --alsologtostderr -v=3: (13.123397391s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555986 -n newest-cni-555986
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555986 -n newest-cni-555986: exit status 7 (74.880793ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-555986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (45.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-555986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2
E0229 18:44:28.711006   13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/bridge-911469/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-555986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.29.0-rc.2: (45.176691616s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-555986 -n newest-cni-555986
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.54s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-555986 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-555986 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555986 -n newest-cni-555986
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555986 -n newest-cni-555986: exit status 2 (271.065675ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555986 -n newest-cni-555986
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555986 -n newest-cni-555986: exit status 2 (271.301255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-555986 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-555986 -n newest-cni-555986
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-555986 -n newest-cni-555986
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gc2cf" [d535e9f2-f11e-448d-9511-4d2c8d4a9a1a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005581295s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gc2cf" [d535e9f2-f11e-448d-9511-4d2c8d4a9a1a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007023639s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-270866 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-270866 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-270866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866: exit status 2 (248.277983ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866: exit status 2 (246.091626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-270866 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-270866 -n default-k8s-diff-port-270866
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.49s)

                                                
                                    

Test skip (34/330)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
171 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/cilium 4.02
266 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-911469 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-911469

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-911469" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: ip r s:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: iptables-save:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: iptables table nat:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-911469

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-911469

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-911469" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-911469" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-911469

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-911469

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-911469" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-911469" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-911469" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-911469" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-911469" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: kubelet daemon config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> k8s: kubelet logs:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-911469

>>> host: docker daemon status:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: docker daemon config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: docker system info:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: cri-docker daemon status:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: cri-docker daemon config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: cri-dockerd version:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: containerd daemon status:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: containerd daemon config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: containerd config dump:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: crio daemon status:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: crio daemon config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: /etc/crio:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

>>> host: crio config:
* Profile "cilium-911469" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911469"

----------------------- debugLogs end: cilium-911469 [took: 3.851559115s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-911469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-911469
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)
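Note: every probe in the debug dump above failed with "profile not found" or "context does not exist" because the cilium profile was never created before the group was skipped. For reference, a minimal sketch of how the profile these logs name could be brought up by hand on a KVM host and a few of the same probes re-run; the flag set and the "cilium" daemon-set name are assumptions for illustration, not the harness's actual invocation:

# start the profile the debug logs refer to (assumed flags, not the harness's exact command)
minikube start -p cilium-911469 --driver=kvm2 --cni=cilium --wait=true
# confirm the profile now exists
minikube profile list
# re-run two of the probes from the dump above
minikube ssh -p cilium-911469 "sudo crictl pods"
# "cilium" is the upstream default daemon-set name in kube-system; adjust if it differs
kubectl --context cilium-911469 -n kube-system describe daemonset cilium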

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-375029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-375029
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)